May 17th, 2021 
Darrin Speegle (Saint Louis University) 
The existence of wavelet sets for matrix-lattice pairs 
A wavelet set is a measurable subset of R^n which tiles R^n simultaneously via translations along a lattice and via dilations by powers of an invertible matrix. We are interested in classifying the matrix-lattice pairs for which wavelet sets exist. Previous results have mostly focused on the relationship between the eigenvalues/eigenvectors and the lattice.
We provide a new necessary condition and a new sufficient condition on the interaction between a matrix dilation and a lattice for a wavelet set to exist. The difference between the conditions is a well-studied phenomenon in Diophantine approximation. Applications are given to the case where all eigenvalues are at least one in modulus and the case where exactly one eigenvalue is less than one in modulus. An explicit pair is constructed in dimension 3 which satisfies neither the necessary nor the sufficient condition.

May 10th, 2021 
Min Wu (UMD) 
Exploiting Micro-Signals for Media and Physiological Forensics 
A variety of nearly invisible “micro-signals” have played important roles in media security and forensics. Often considered as noise or interference, these micro-signals are ubiquitous and typically an order of magnitude lower in strength or scale than the dominant ones, and they are traditionally removed or ignored as nuisances outside the forensic domain. This talk will highlight two types of micro-signals from our information forensics research, and show the recent trend of harnessing micro-signals for wellness and healthcare. I will discuss the technical challenges in exploring micro-signals as well as cross-disciplinary opportunities and applications.

May 3rd, 2021 
Afonso Bandeira (ETH Zurich) 
Computational Hardness of Hypothesis Testing and Quiet Plantings 
When faced with a data analysis, learning, or statistical inference problem, the amount and quality of data available fundamentally determines whether such tasks can be performed with certain levels of accuracy. With the growing size of datasets, however, it is crucial not only that the underlying statistical task is possible, but also that it is doable by means of efficient algorithms. In this talk we will discuss methods aiming to establish limits of when statistical tasks are possible with computationally efficient methods, or when there is a fundamental "statistical-to-computational gap" in which an inference task is statistically possible but inherently computationally hard. We will focus on hypothesis testing and the "Low Degree Method", and also address hardness of certification via "quiet plantings". Guiding examples will include Sparse PCA, bounds on the Sherrington-Kirkpatrick Hamiltonian, and lower bounds on chromatic numbers of random graphs.

April 19th, 2021 
Joel Tropp (Caltech) 
Scalable semidefinite programming 
Semidefinite programming (SDP) is a powerful framework from convex optimization that has striking potential for data science applications. This talk describes a provably correct randomized algorithm for solving large, weakly constrained SDP problems by economizing on the storage and arithmetic costs. Numerical evidence shows that the method is effective for a range of applications, including relaxations of MaxCut, abstract phase retrieval, and quadratic assignment problems. Running on a laptop, the algorithm can handle SDP instances where the matrix variable has over 10^14 entries.
This talk will highlight the ideas behind the algorithm in a streamlined setting. The insights include a careful problem formulation, design of a bespoke optimization method, and use of randomized matrix computations.

April 12th, 2021 
Gestur Olafsson (Louisiana State University) 
Atomic decomposition of Bergman spaces 
In 1980, Coifman and Rochberg gave a quite general atomic decomposition of Bergman L^p-spaces of holomorphic functions on unbounded symmetric domains. In the same article, they pointed out that they were not able to transfer those results to the bounded realization of those domains.
In this talk we discuss some general facts about Bergman spaces and then present the results of Coifman and Rochberg. We then turn our attention to bounded domains, but to avoid too much notation we will concentrate on simple examples. Our toolbox contains some basic facts about projective representations, but we will keep that to a minimum. By applying representation/coorbit theory we obtain a large family of new atoms, including those of Coifman and Rochberg. Our approach also allows us to describe the relation between atoms for the bounded and unbounded realizations of the domain, thus solving one of the issues raised by Coifman and Rochberg.

April 5th, 2021 
Yurii Lyubarskii (NTNU, St. Petersburg University) 
Gabor analysis for rational functions 
Given a window function g, its frame set is the set of pairs (α, β) for which the Gabor system of time-frequency shifts {e^{2πiβmx} g(x − αn) : m, n ∈ Z} forms a frame for L^2(R). Do all α, β such that αβ ≤ 1 belong to the frame set of g?
Up to 2011, only a few such functions g (up to translation, modulation, dilation and the Fourier transform) were known. In 2011, K. Gröchenig and J. Stöckler extended this class by including the totally positive functions of finite type (an uncountable family, yet depending on a finite number of parameters), and later added the totally positive functions of Gaussian finite type.
We suggest another approach to the problem and prove that all Herglotz rational functions with imaginary poles also belong to this class. This approach also gives new results for general rational functions.

March 29th, 2021 
Chris Heil (Georgia Tech) 
Absolute Continuity and the Banach-Zaretsky Theorem 
This talk is based on a chapter that I wrote for a book in honor of John Benedetto's 80th birthday, and this talk is dedicated to him. Years ago, John wrote a text "Real Variable and Integration", published in 1976. This was not the text that I first learned real analysis from, but it became an important reference for me. A later revision and expansion by John and Wojtek Czaja appeared in 2009. Eventually, I wrote my own real analysis text, aimed at students taking their first course in measure theory. My goal was that each proof was to be both rigorous and enlightening. I failed (in the chapters on differentiation and absolute continuity). I will discuss the real analysis theorem whose proof I find the most difficult and unenlightening. But I will also present the Banach-Zaretsky Theorem, which I first learned from John's text. This is an elegant but often overlooked result, and by using it I (re)discovered enlightening proofs of theorems whose standard proofs are technical and difficult. This talk will be a tour of the absolutely fundamental concept of absolute continuity from the viewpoint of the Banach-Zaretsky Theorem.

March 22nd, 2021 
Doug Cochran (Arizona State University) 
Coherence of Subspaces 
Several notions of the coherence of a collection of K elements of an N-dimensional vector space V have been defined over the past three decades or so. Many of these have been motivated by their utility in various application areas, including compressive sensing, dictionary learning, and multichannel signal detection. The vectors in the collection are often taken to have unit norm, and the coherence values are functions of the K one-dimensional subspaces that are their respective spans; i.e., functions of K points in projective space. This presentation starts from a notion of coherence that arises in connection with multichannel signal detection, where typically K is much larger than N. It proceeds, through the use of exterior algebra, to extend this notion to collections of p-dimensional subspaces; i.e., the coherence is a function of a collection of L points on the Grassmannian G(p,V). Properties of this “p-coherence” and its relationship to canonical coordinates will be discussed.
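As orientation, a minimal sketch (my own toy example, not from the talk) of the simplest such notion, the worst-case coherence of a collection of unit-norm vectors, i.e. the largest pairwise inner-product magnitude:

```python
import numpy as np

# Worst-case coherence of unit-norm vectors: mu = max_{i != j} |<v_i, v_j>|.
# Toy collection of K = 3 vectors in an N = 2 dimensional space.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
V /= np.linalg.norm(V, axis=1, keepdims=True)  # normalize rows to unit norm
G = np.abs(V @ V.T)                            # Gram matrix magnitudes
np.fill_diagonal(G, 0.0)                       # ignore self inner products
mu = G.max()                                   # coherence of the collection
```

Here mu = 1/sqrt(2), attained by the pair e1 and (e1 + e2)/sqrt(2).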

March 15th, 2021 
René Vidal (Johns Hopkins University) 
Dual Principal Component Pursuit 
We consider the problem of learning a union of subspaces from data corrupted by outliers. State-of-the-art methods based on convex l1 and nuclear norm minimization require the subspace dimensions and the number of outliers to be sufficiently small. In this talk I will present a non-convex approach called Dual Principal Component Pursuit (DPCP), which can provably learn subspaces of high relative dimension and tolerate a large number of outliers by solving a non-convex l1 minimization problem on the sphere. Specifically, I will present both geometric and probabilistic conditions under which every global solution to the DPCP problem is a vector in the orthogonal complement of one of the subspaces. Such conditions show that DPCP can tolerate as many outliers as the square of the number of inliers. I will also present various optimization algorithms for solving the DPCP problem and show that a Projected Sub-Gradient Method admits linear convergence to the global minimum of the underlying non-convex and non-smooth optimization problem. Experiments show that the proposed method is able to handle more outliers and higher relative dimensions than state-of-the-art methods. This is joint work with Tianjiao Ding, Daniel Robinson, Manolis Tsakiris and Zhihui Zhu.
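The core DPCP computation can be sketched as follows (a hedged toy implementation with my own data and step sizes, not the authors' code): minimize ||X^T b||_1 over the unit sphere by projected subgradient steps, so that b approaches a normal vector to the inlier subspace.

```python
import numpy as np

# Toy data in R^3: inliers spanning the xy-plane, plus one outlier.
inliers = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], float).T
outlier = np.array([[1, 1, 1]], float).T / np.sqrt(3)
X = np.hstack([inliers, outlier])          # columns are data points

def f(b):                                  # DPCP objective ||X^T b||_1
    return np.abs(X.T @ b).sum()

b = np.array([0.3, 0.2, 0.93])
b /= np.linalg.norm(b)
b_best, f_best = b, f(b)
for k in range(50):
    g = X @ np.sign(X.T @ b)               # subgradient of ||X^T b||_1 at b
    b = b - 0.05 * 0.8**k * g              # geometrically decaying step
    b /= np.linalg.norm(b)                 # project back to the sphere
    if f(b) < f_best:                      # track the best iterate
        b_best, f_best = b, f(b)

# The best iterate approaches e3, the normal of the inlier plane.
assert f_best < 1.0 and abs(b_best[2]) > 0.8
```

The step-size schedule here is an arbitrary geometric decay; the linear convergence result mentioned in the abstract concerns a particular piecewise-geometric schedule analyzed by the authors.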

March 8th, 2021 
Rodolfo Torres (University of California, Riverside) 
On Compactness of Commutators of Multiplication and Bilinear Pseudodifferential Operators and a New Subspace of BMO 
It is known that the compactness of the commutators of bilinear homogeneous Calderón-Zygmund operators with pointwise multiplication, when acting on products of Lebesgue spaces, is characterized by the membership of the multiplying function in CMO. This space is the closure in BMO of its subspace of smooth functions with compact support. It is shown in this work that, for bilinear Calderón-Zygmund operators arising from smooth (inhomogeneous) bilinear Fourier multipliers or bilinear pseudodifferential operators, one can actually consider multiplying functions in a new subspace of BMO larger than CMO. In this talk we will recall some introductory material to put the work in context and then present the new results. The new results are joint work with Qingying Xue from Beijing Normal University.

March 1st, 2021 
Marcin Bownik (University of Oregon) 
Parseval wavelet frames on Riemannian manifolds 
In this talk we discuss how to construct Parseval wavelet frames in L^2(M) for a general Riemannian manifold M. We also show the existence of wavelet unconditional frames in L^p(M). This construction is made possible thanks to a smooth orthogonal projection decomposition of the identity operator on L^2(M), which is an operator version of a smooth partition of unity. We also show some applications, such as a characterization of Triebel-Lizorkin F^s_{p,q}(M) and Besov B^s_{p,q}(M) spaces on compact manifolds in terms of magnitudes of coefficients of Parseval wavelet frames. This talk is based on joint work with Dziedziul and Kamont.

February 15th, 2021 
Akram Aldroubi (Vanderbilt University) 
CUR Matrix Decomposition, Subspace Segmentation, and Motion Tracking 
The subspace segmentation problem seeks to classify, or cluster, data in a high-dimensional space that is drawn from a union of much lower-dimensional subspaces. It has several applications, including motion tracking in videos. One method for subspace segmentation is to find a similarity matrix from the data which can be used to identify the subspace clusters. I will introduce and discuss the CUR decomposition (Column-U-Row), and describe how most of the known similarity matrix methods are special cases of a general method that uses CUR in the case that the subspaces are independent. I will show how this CUR decomposition can be used for motion tracking applications.
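As a minimal illustration (mine, and exact only because the toy matrix has exactly low rank), a CUR decomposition selects actual columns C and rows R of A and links them through the pseudoinverse of their intersection block:

```python
import numpy as np

# CUR sketch: A ~= C @ U @ R with C = chosen columns, R = chosen rows,
# and U = pinv(W), where W = A[I, J] is the intersection submatrix.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0]])        # rank-1 toy matrix
I, J = [0], [0]                        # chosen row and column indices
C = A[:, J]                            # actual columns of A
R = A[I, :]                            # actual rows of A
W = A[np.ix_(I, J)]                    # intersection block
U = np.linalg.pinv(W)
approx = C @ U @ R                     # exact here since rank(A) = rank(W)
```

The reconstruction is exact whenever the selected rows and columns capture the full rank of A; for noisy data one selects more rows and columns than the target rank.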

February 8th, 2021 
Alfred Hero (University of Michigan) 
Sparse Matrix Normal Approximations 
The sparse matrix normal approximation to a random tensor-valued variable is a sparse low-rank approximation to the population covariance matrix. The approximation fits low-dimensional Kronecker factors that can be interpreted as modal covariances of each mode of the tensor. The approximation has been formulated for approximating the covariance matrix and for approximating its inverse, with both Kronecker products and Kronecker sums of factors. After introducing and illustrating the general framework, we will present a new approximation that is based on a generalized Sylvester representation that is exact for spatio-temporal random fields obeying a Poisson equation. These approximations will be illustrated for the application of predicting the spatio-temporal evolution of solar active regions (sunspots) and flares from NASA SDO image sensor data.
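To make the Kronecker structure concrete (an illustrative toy model of mine, not the speaker's code): a matrix-normal model for a matrix-valued sample X posits cov(vec(X)) = A ⊗ B, with A and B the modal covariances of the two tensor modes.

```python
import numpy as np

# Kronecker-product covariance: cov(vec(X)) = kron(A, B), with A, B the
# modal covariance factors (here 2 x 2 each, so Sigma is 4 x 4).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # row-mode covariance factor
B = np.array([[1.0, 0.5],
              [0.5, 1.0]])             # column-mode covariance factor
Sigma = np.kron(A, B)                  # full 4 x 4 covariance matrix
# Sigma inherits symmetry and positive definiteness from its factors.
assert np.allclose(Sigma, Sigma.T)
assert np.all(np.linalg.eigvalsh(Sigma) > 0)
```

The payoff of the structure is parameter count: the two factors have far fewer free entries than the full covariance, which is what makes the fit feasible for large tensors.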

February 1st, 2021 
Michael Lacey (Georgia Tech) 
Improving Inequalities for Discrete Averages 
In the continuous setting, improving inequalities are well known and have a long history. The theory for averages over surfaces (of codimension one) is well understood. There is a newly emerging theory for discrete averages. Their properties depend upon subtle aspects of the continuous averages, as well as of the periodic variants. I will provide an introduction to the subject, illustrating how these different aspects combine.

January 25th, 2021 
Andrea Bertozzi (UCLA) 
Pseudo-Spectral Methods for High Dimensional Data Analysis on Graphs 
I will speak about a general class of machine learning problems in which data lives on similarity graphs and the goal is to solve a penalized graph min-cut problem. Applications include semi-supervised learning, unsupervised learning, and modularity optimization, originally developed for community detection on networks but recast as an unsupervised machine learning problem. These problems have a mathematical connection to total variation minimization in Euclidean space, and this analogy leads to a natural class of machine learning algorithms that mimic pseudo-spectral methods in nonlinear partial differential equations. The methods are graph analogues of geometric motion, e.g. motion by mean curvature and the MBO scheme that approximates those dynamics.

January 4th, 2021 
Qiyu Sun (University of Central Florida) 
Some Mathematical Problems in Graph Signal Processing 
Graph signal processing provides an innovative framework to handle data residing on various networks and many irregular domains. It is an emerging interdisciplinary field that merges algebraic and spectral graph theory with applied and computational harmonic analysis. In this talk, I will discuss some mathematical problems related to graph signal processing, with emphasis on phase retrieval and velocity fields, filtering and inverse filtering, sampling and reconstruction, and distributed verification, implementation and optimization.
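A small concrete instance of the filtering problem mentioned above (my own toy example, not from the talk): on a graph, the Fourier basis is the eigenbasis of the graph Laplacian, and filtering scales the corresponding coefficients.

```python
import numpy as np

# Graph spectral filtering on a path graph with 4 nodes.
A = np.diag(np.ones(3), 1); A = A + A.T           # adjacency of the path
L = np.diag(A.sum(1)) - A                         # combinatorial Laplacian
lam, U = np.linalg.eigh(L)                        # graph Fourier basis
x = np.array([1.0, -1.0, 1.0, -1.0])              # a "high-frequency" signal
x_hat = U.T @ x                                   # graph Fourier transform
h = (lam <= 1.0).astype(float)                    # ideal low-pass filter
y = U @ (h * x_hat)                               # filtered signal
assert np.allclose(U @ x_hat, x)                  # inverse GFT recovers x
```

Small eigenvalues of L correspond to slowly varying eigenvectors, so the low-pass filter h suppresses the oscillatory content of x.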

December 21st, 2020 
Vivek Goyal (Boston University) 
One Click at a Time: Photon- and Electron-Level Modeling for Improved Imaging 
Detectors that are capable of sensing a single photon are no longer rare, yet the bulk of signal processing intuitions and methods have an implicit connection with Gaussian noise models. Particle-level modeling can lead to substantially different methods that sometimes perform dramatically better than classical methods. For example, using detectors with single-photon sensitivity enables lidar systems to form depth and reflectivity images at very long ranges. Initially, our interest was in exploiting inhomogeneous Poisson process models and the typical structure of natural scenes to achieve extremely high photon efficiency. However, modeling at the level of individual photons does not merely give advantages when signals are weak. It is also central to withstanding high levels of ambient light and mitigating the effects of detector dead time, which ordinarily create high bias in high-flux imaging. Our sensor signal processing advances thus potentially improve lidar performance in settings with very high dynamic range of optical flux, such as navigation of autonomous vehicles. Modeling of dead time can presumably improve many other applications of time-correlated single-photon counting. Furthermore, modeling at the level of individual incident particles and emitted secondary electrons leads to improvements in focused ion beam microscopy that apply uniformly over all dose levels.
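The dead-time bias can be illustrated with a toy simulation (my own, using an artificial uniform arrival model rather than any model from the talk): when several photons arrive per cycle but the detector records only the first, the recorded arrival times are biased early.

```python
import numpy as np

# Toy pile-up illustration: at high flux, a detector with dead time keeps
# only the first photon per cycle, skewing the arrival-time histogram.
rng = np.random.default_rng(0)
true_times = rng.uniform(0.0, 1.0, size=(10000, 5))  # 5 photons per cycle
recorded = true_times.min(axis=1)                    # detector keeps the first
assert recorded.mean() < true_times.mean()           # biased toward early times
```

With 5 uniform arrivals per cycle, the recorded (first-photon) times have mean near 1/6 rather than the true mean of 1/2, which is the kind of high-flux bias the abstract refers to.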

December 14th, 2020 
Gil Strang (MIT) 
The Column-Row Factorization of a Matrix A = CR 
Matrix factorizations like A = LU and A = UΣV^{T} have become the organizing principles of linear algebra. This expository paper develops a column-row factorization A = CR = (m × r)(r × n) for any matrix of rank r. The matrix C contains the first r independent columns of A: a basis for the column space. The matrix R contains the nonzero rows of the reduced row echelon form rref(A). Then R = [I F] P contains a matrix F that expresses the remaining n − r columns of A as combinations CF of the independent columns in C. When the independent columns don’t all come first, P permutes those columns of I and F into their correct positions so that CR = [C CF] P produces A.
A = CR is an “interpolative decomposition” that includes r actual columns of A in C.
A more symmetric factorization A = C W^{−1} R^{*} also includes r actual rows of A in R^{*}.
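The construction described above can be sketched directly (a minimal illustration using sympy's rref, with a toy matrix of my own choosing):

```python
import sympy as sp

# Compute A = CR from rref(A): C = first r independent columns of A,
# R = nonzero rows of the reduced row echelon form.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 1, 1]])       # a 3 x 3 matrix of rank 2

R_full, pivots = A.rref()        # pivots = indices of independent columns
r = len(pivots)                  # rank of A
C = A[:, list(pivots)]           # m x r: actual columns of A
R = R_full[:r, :]                # r x n: nonzero rows of rref(A)

assert C * R == A                # the factorization reproduces A
```

Here r = 2, C holds columns 0 and 1 of A, and the product C*R reproduces A exactly.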

November 30th, 2020 
Carlos Cabrelli (University of Buenos Aires) 
Frames by Operator Orbits 
I will review some results on the question of when the orbits {T^{j} g : j ∈ J, g ∈ G} of a bounded operator T acting on a Hilbert space H, where G is a subset of H, form a frame of H. I will also comment on recent advances. This is motivated by the Dynamical Sampling problem, which consists of recovering a time-evolving signal from its space-time samples.
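A finite-dimensional toy version of the question (my own illustration, not from the talk): check whether a short orbit {g, Tg} is a frame by computing the frame operator and its extreme eigenvalues.

```python
import numpy as np

# In R^2, the orbit {g, Tg} of g under T is a frame iff the frame operator
# S = sum_j f_j f_j^T is positive definite; its extreme eigenvalues are the
# frame bounds.
T = np.diag([1.0, 0.5])
g = np.array([1.0, 1.0])
orbit = np.stack([g, T @ g])               # rows: T^j g for j = 0, 1
S = orbit.T @ orbit                        # frame operator
lower, upper = np.linalg.eigvalsh(S)       # frame bounds: lower <= upper
assert lower > 0                           # the orbit spans, so it is a frame
```

The interesting infinite-dimensional questions in the talk concern when such orbits remain frames for all of H, where positivity of the frame operator must hold with uniform bounds.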

November 23rd, 2020 
Tomaso Poggio (MIT) 
Deep Puzzles: Towards a Theoretical Understanding of Deep Learning 
Very recently, square loss has been observed to perform well in classification tasks with deep networks. However, a theoretical justification is lacking, unlike the cross-entropy case for which an asymptotic analysis is available. Here we discuss several observations on the dynamics of gradient flow under the square loss in ReLU networks. We show how convergence to a local minimum-norm solution is expected when normalization techniques such as Batch Normalization (BN) or Weight Normalization (WN) are used, in a way which is similar to the behavior of linear degenerate networks under gradient descent (GD), though the reason for zero initial conditions is different. The main property of the minimizer that bounds its expected error is its norm: we prove that among all the interpolating solutions, the ones associated with smaller Frobenius norms of the weight matrices have better margin and better bounds on the expected classification error. The theory yields several predictions, including aspects of Donoho's Neural Collapse and the bias induced by BN on the weight matrices towards orthogonality.

November 16th, 2020 
Jean-Pierre Gabardo (McMaster University) 
Factorization of positive definite functions through convolution and the Turán problem 
An open neighborhood U of 0 in Euclidean space is called symmetric if U = −U. Let PD(U) be the class of continuous positive definite functions supported on U and taking the value 1 at the origin. The Turán problem for U consists in computing the Turán constant of U, which is the supremum of the integrals of the functions in PD(U). Clearly, this problem can also be stated on any locally compact abelian group. In this talk, we will introduce the notion of a "dual" Turán problem. In the case of a finite abelian group G, the Turán problem for a symmetric set S thus consists in maximizing the integral (which is just a finite sum) over G of the positive definite functions taking the value 1 at 0 and supported on S, while its dual is just the Turán problem for the set consisting of the complement of S together with the origin. We will show a surprising relationship between the maximizers of the Turán problem and those of the dual problem. In particular, their convolution product must be identically 1 on G. We then extend those results to Euclidean space by first finding an appropriate notion of the dual Turán problem in this context. We will also point out an interesting connection between the Turán problem and frame theory by characterizing so-called Turán domains as domains admitting Parseval frames of (weighted) exponentials of a special kind.

November 9th, 2020 
Jill Pipher (Brown University) 
Boundary value problems for elliptic complex coefficient systems: the p-ellipticity condition 
Formulating and solving boundary value problems for divergence-form real elliptic equations has been an active and productive area of research ever since the foundational work of De Giorgi, Nash, and Moser established Hölder continuity of solutions when the coefficients are merely bounded and measurable. The solutions to such real-valued equations share some important properties with harmonic functions: maximum principles, Harnack principles, and estimates up to the boundary that enable one to solve Dirichlet problems in the classical sense of nontangential convergence. Solutions to complex elliptic equations and elliptic systems do not necessarily share these good properties of continuity or maximum principles.
In joint work with M. Dindos, we introduced in 2017 a structural condition (p-ellipticity) on divergence-form elliptic equations with complex-valued matrices, inspired by a condition related to L^p contractivity due to Cialdea and Maz'ya. The p-ellipticity condition that generalizes Cialdea-Maz'ya was also simultaneously discovered by Carbonaro and Dragičević, who used it to prove a bilinear embedding result. Subsequently, Feneuil, Mayboroda and Zhao have used p-ellipticity to study well-posedness of a degenerate elliptic operator associated with domains with lower-dimensional boundary.
In this seminar, we discuss p-ellipticity for complex divergence-form equations, and then describe recent work, joint with J. Li and M. Dindos, extending this condition to elliptic systems. In particular, we can give applications to solvability of Dirichlet problems for the Lamé systems.

November 2nd, 2020 
Ursula Molter (University of Buenos Aires) 
Riesz Bases of Exponentials and the Bohr Topology 
In this talk we address the question of which domains Ω of R^{d} with finite measure admit a Riesz basis of exponentials, that is, for which there exists a discrete set B ⊂ R^{d} such that the exponentials E(B) = {e^{2πiβ·ω} : β ∈ B} form a Riesz basis of L^{2}(Ω). Using the Bohr compactification of the integers, we show a necessary and sufficient condition to ensure that a multitile Ω ⊂ R^{d} of positive measure (but not necessarily bounded) admits a structured Riesz basis of exponentials for L^{2}(Ω). Here a set Ω ⊂ R^{d} is a k-multitile for Z^{d} if Σ_{λ ∈ Z^d} χ_{Ω}(ω − λ) = k for a.e. ω ∈ R^{d}.

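The k-multitile condition is easy to check numerically in a toy case (my own example, with d = 1): Ω = [0, 2) is a 2-multitile for Z, since every point is covered by exactly two integer translates of Ω.

```python
import numpy as np

# Verify sum_{lam in Z} chi_Omega(omega - lam) = 2 for Omega = [0, 2),
# sampling omega on a grid in (0, 1) (the covering count is 1-periodic).
omega = np.linspace(0.01, 0.99, 25)
lams = np.arange(-5, 6)                    # enough translates to cover
counts = sum(((omega - l >= 0) & (omega - l < 2)).astype(int) for l in lams)
assert np.all(counts == 2)                 # Omega is a 2-multitile for Z
```

For each ω in (0, 1), exactly the translates λ = 0 and λ = −1 place ω − λ inside [0, 2), giving covering number k = 2.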
October 26th, 2020 
Virginia Naibo (Kansas State University) 
Fractional Leibniz Rules: A Guided Tour 
The usual Leibniz rules express the derivative of a product of functions in terms of the derivatives of each of the factors. In an analogous sense, fractional Leibniz rules involve the concept of fractional derivative and provide estimates of the size and smoothness of a product of functions in terms of the size and smoothness of each of the factors. These bilinear estimates stem from the study of partial differential equations such as Euler, Navier-Stokes and Korteweg-de Vries. In this talk, I will present fractional Leibniz rules associated to bilinear pseudodifferential operators with homogeneous symbols, including Coifman-Meyer multipliers, and with symbols in the bilinear Hörmander classes. Through different approaches, the estimates will be discussed in the settings of weighted Lebesgue, Triebel-Lizorkin and Besov spaces.

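A prototypical estimate of this kind (the standard Kato-Ponce form, stated for orientation; the talk's results concern more general bilinear operators and function spaces) is:

```latex
\|D^s(fg)\|_{L^p} \lesssim \|D^s f\|_{L^{p_1}} \|g\|_{L^{q_1}}
  + \|f\|_{L^{p_2}} \|D^s g\|_{L^{q_2}},
\qquad \frac{1}{p} = \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2},
```

where D^s = (−Δ)^{s/2} is the fractional derivative and s > 0: each factor absorbs the full fractional derivative in one term, mirroring the classical product rule.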
October 19th, 2020 
Ronald Coifman (Yale) 
Phase Unwinding Analysis: Nonlinear Fourier Transforms and Complex Dynamics 
Our goal here is to introduce recent developments in the analysis of highly oscillatory functions. In particular, we will sketch methods extending conventional Fourier analysis, exploiting both the phase and amplitudes of holomorphic functions. The miracles of nonlinear complex holomorphic analysis, such as factorization and composition of functions, lead to new versions of holomorphic orthonormal bases, relating them to multiscale dynamical systems obtained by composing Blaschke factors.
We also remark that the phase of a Blaschke product is a one-layer neural net with arctan as an activation sigmoid, and that the composition is a "Deep Neural Net" whose depth is the number of compositions; our results provide a wealth of related libraries of orthogonal bases. We will also indicate a number of applications in medical signal processing, as well as in precision Doppler. Each droplet in the phase image represents a unit of a two-layer deep net and gives rise to an orthonormal basis of the Hardy space.

October 12th, 2020 
Alex Iosevich (University of Rochester) 
Finite Point Configurations and Applications to Frame Theory 
We are going to discuss some recent developments in the study of finite point configurations in sets of a given Hausdorff dimension. We shall also survey some applications of the finite point configuration machinery to the problems of existence and non-existence of exponential/Gabor bases and frames. 