Manufacturers are adding intelligent capabilities (e.g., voice assistants, gesture sensing, facial recognition) to home devices at a rapid pace, leading to an explosion of data generated at the edge. Conventional wisdom calls for offloading this data to the cloud for processing, since individual devices have only limited computational resources. However, we argue that the aggregate processing capability of all devices in a home creates an opportunity to process data inside the home, which can offer users stronger privacy guarantees and lower latencies. In this paper, we present a performance comparison between the capabilities of mobile phones and new hardware designed for deep learning inference: the Coral TPU and the NVIDIA Jetson Nano. We also describe a new distributed inference system, named DeepHome, that distributes machine learning inference tasks across multiple heterogeneous devices in the home. We discuss the issues that arise when processing data in an in-home context and present initial performance results from our working system.
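The abstract states that DeepHome distributes inference tasks across heterogeneous home devices but leaves the scheduling policy to the body of the paper. As a minimal sketch of one plausible approach, and not DeepHome's actual algorithm, the Python below greedily assigns each task to whichever device is estimated to become free earliest, using per-device throughput numbers of the kind a benchmark of a phone, Coral TPU, or Jetson Nano might yield; the `Device` class, `dispatch` function, and throughput figures are all hypothetical.

```python
# Hypothetical sketch: greedy dispatch of identical inference tasks across
# heterogeneous home devices, weighted by each device's measured throughput.
# All names and numbers here are illustrative, not taken from DeepHome.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Device:
    finish_time: float  # estimated time at which the device becomes free
    name: str = field(compare=False)
    throughput: float = field(compare=False)  # inferences/second (benchmarked)

def dispatch(devices: list[Device], num_tasks: int) -> dict[str, int]:
    """Assign num_tasks inference requests, always picking the device
    that will finish its current queue earliest."""
    heap = list(devices)
    heapq.heapify(heap)
    counts = {d.name: 0 for d in devices}
    for _ in range(num_tasks):
        d = heapq.heappop(heap)              # earliest-available device
        d.finish_time += 1.0 / d.throughput  # one task's service time
        counts[d.name] += 1
        heapq.heappush(heap, d)
    return counts

if __name__ == "__main__":
    fleet = [Device(0.0, "phone", 12.0),
             Device(0.0, "coral-tpu", 130.0),
             Device(0.0, "jetson-nano", 25.0)]
    # Faster devices receive proportionally more of the 1000 tasks.
    print(dispatch(fleet, 1000))
```

Under this policy, work naturally gravitates toward the most capable accelerators while still keeping every device busy, which is one simple way a system could exploit the aggregate in-home processing capability the abstract describes.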