From megapixel imagery to DNA microarrays, modern data collection methods are creating data sets with higher and higher dimensionality. Despite the high ambient dimensionality, most data sets exhibit much lower complexity. For example, it is well known that natural images admit sparse approximations in wavelet bases. The existence of sparse approximations indicates that this lower complexity stems in part from the data's low intrinsic dimensionality. Compressed sensing is one example of exploiting low dimensionality to perform estimation and model selection in high-dimensional data analysis.

In many situations, it is possible to collect large amounts of data beforehand and construct an empirical prior distribution for a data set. In this talk, we examine the phenomenon of posterior concentration in high-dimensional data analysis, validating a framework for incorporating this kind of precise a priori information. In our work, we analyze under-sampled linear regression with a Gaussian noise model, and we show that a large class of priors admits a finite-sample posterior concentration bound (with explicit constants) around the true signal in this setting. While most compressed sensing theory shies away from incorporating more precise a priori information because of its focus on convex optimization, our results demonstrate that a priori information can be successfully incorporated in a full Bayesian analysis with similar theoretical guarantees. We discuss present and future directions for exploring and exploiting this framework.