Abstract:

One-bit quantization is a method of representing bounded signals by {+1,-1} sequences computed from regularly spaced samples of these signals; as the sampling density increases, convolving these one-bit sequences with appropriately chosen averaging kernels should produce increasingly close approximations of the original signals. This method is widely used for analog-to-digital conversion of audio signals because its implementation offers many advantages over the classical and more familiar method of fine-resolution quantization. Despite its popular use, one-bit quantization is not well understood in the approximation-theoretic context. A fundamental open problem is to determine the best possible behavior of the approximation error as a function of the sampling density for various function classes, and most importantly for the class of bandlimited functions, which is a model space for audio signals. Other open problems ask for precise error bounds for particular, widely used one-bit quantization algorithms.
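The scheme described above can be illustrated with a minimal numerical sketch. The code below implements a first-order Sigma-Delta quantizer, a standard (and simple) one-bit quantization rule, not necessarily the algorithm discussed in the talk: each sample of an oversampled signal is mapped to +1 or -1 so that the running quantization error stays bounded, and the signal is then approximately recovered by convolving the bit sequence with an averaging kernel. The signal, oversampling factor `lam`, and window length `N` are arbitrary choices for the demonstration.

```python
import numpy as np

def sigma_delta_1bit(x):
    """First-order Sigma-Delta: map samples in [-1, 1] to a {+1, -1} sequence.

    The greedy rule keeps the accumulated error u bounded (|u| <= 1),
    so averages of the bits track averages of the signal to within 2/N.
    """
    q = np.empty(len(x))
    u = 0.0
    for n, xn in enumerate(x):
        q[n] = 1.0 if u + xn >= 0 else -1.0
        u += xn - q[n]  # state update: u stays bounded
    return q

# Oversample a slowly varying test signal by a factor lam.
lam = 256
t = np.arange(0.0, 2.0, 1.0 / lam)       # sampling step 1/lam
x = 0.5 * np.sin(np.pi * t)              # bounded "bandlimited" test signal
bits = sigma_delta_1bit(x)

# Reconstruct by convolving the bits with an averaging kernel.
N = 32                                   # averaging window, in samples
kernel = np.ones(N) / N
recon = np.convolve(bits, kernel, mode="same")

# Approximation error away from the boundary of the interval.
err = np.max(np.abs(recon - x)[N:-N])
```

For this first-order scheme the interior error decays only polynomially in the oversampling rate; the point of the result announced in the talk is that more sophisticated one-bit schemes can drive the error down exponentially fast in the sampling density.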

In this talk, we present recent progress toward solving these problems and the interplay of various types of mathematics in achieving these results. In particular, we give the first one-bit quantization algorithm that achieves exponential accuracy for the class of bandlimited functions.