Neural networks can learn complicated mathematical relationships and quickly compute approximations of computationally intensive physical models. The choice of topology and preprocessing is important to neural network learning. In this talk, I will describe a building-block-based framework for understanding the range of functions that a neural network can approximate given a particular topology. I will also show how well-chosen preprocessing methods can reduce the network complexity needed to solve a problem. As examples, I will present problems involving functions of circular data (e.g., time and geolocation).
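The abstract does not specify which preprocessing methods the talk covers; one common technique for circular features such as time of day, sketched here as an assumption, is mapping each value to a point on the unit circle via sine and cosine so that values on either side of the wrap-around point end up close in feature space:

```python
import math

def encode_circular(value, period):
    """Encode a circular quantity (e.g. hour of day with period 24)
    as a (sin, cos) pair on the unit circle, so that values near the
    wrap-around (23:00 vs 01:00) are close in feature space."""
    angle = 2 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

# Raw hours 23 and 1 differ by 22, but on the circle they are
# only two hours apart.
s23, c23 = encode_circular(23, 24)
s1, c1 = encode_circular(1, 24)
dist = math.hypot(s23 - s1, c23 - c1)  # chord length = 2*sin(pi/12)
```

A network fed the raw hour would have to learn the discontinuity at midnight itself; the two-feature encoding removes that burden, which is one way preprocessing can reduce the complexity a network needs.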