High-resolution population density maps are a critical component of global development efforts, including service delivery, resource allocation, and disaster response. Traditional population density estimates are predominantly survey-driven; surveys are laborious, prohibitively expensive, infrequently updated, and inaccurate, especially in remote areas. Furthermore, these maps are developed on a regional basis with methods that vary from region to region, introducing notable spatio-temporal heterogeneity and bias. The advent of global-scale satellite imagery provides an unprecedented opportunity to create inexpensive, accurate, homogeneous, and rapidly updated population maps. To fulfill this vision, we must overcome both infrastructure and methodological obstacles. We propose a convolutional neural network approach that addresses some of the methodological challenges while employing a publicly available, albeit low-resolution, remotely sensed product. The method converts satellite images into population density estimates. To explore both the accuracy and generalizability of our approach, we train our neural network on Tanzanian imagery and test the model on Kenyan data. We show that our method generalizes to unseen data and improves upon the current state of the art by 177 percent.