The matrix-vector product, or summation operation, lies at the core of most scientific computation. The product of a general dense $N\times N$ matrix with an $N$-vector requires $O(N^2)$ operations and $O(N^2)$ memory. One of the major achievements of computational harmonic analysis, the Fast Fourier Transform, reduced the complexity of this operation for uniformly sampled Fourier matrices to $O(N\log N)$ operations, with significant scientific and industrial impact. The fast multipole method, invented by Vladimir Rokhlin and Leslie Greengard in the late 1980s, allows the matrix-vector product to be computed to a specified precision in $O(N)$ operations for a large class of dense matrices. Since its introduction it has had significant successes in the solution of integral equations of potential theory, in the evaluation of sums of special functions such as Gaussians, in nonuniform fast Fourier transforms, and elsewhere.
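As a minimal illustration of the complexity gap (not part of the methods discussed in the talk), the sketch below compares a direct $O(N^2)$ matrix-vector product with an $O(N\log N)$ FFT-based product for the special case of a circulant matrix, which the DFT diagonalizes; the function names are chosen here for exposition.

```python
import numpy as np

def naive_matvec(A, x):
    # Direct O(N^2) product: every entry of A is visited once.
    N = len(x)
    y = np.zeros(N, dtype=complex)
    for i in range(N):
        for j in range(N):
            y[i] += A[i, j] * x[j]
    return y

def circulant_matvec(c, x):
    # A circulant matrix is diagonalized by the DFT, so its
    # matrix-vector product is a convolution, computable in
    # O(N log N) via the FFT (convolution theorem).
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x))

rng = np.random.default_rng(0)
N = 128
c = rng.standard_normal(N)  # first column of the circulant matrix
A = np.array([[c[(i - j) % N] for j in range(N)] for i in range(N)])
x = rng.standard_normal(N)

assert np.allclose(naive_matvec(A, x), circulant_matvec(c, x))
```

The FMM achieves an analogous reduction for matrices that are not diagonalized by the FFT, by exploiting low-rank structure in their off-diagonal blocks rather than exact diagonalization.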
In this talk we introduce the key ideas of the method and describe some open questions and directions for further research.