
VideoPipe: Building Video Stream Processing Pipelines at the Edge

Abstract

Real-time video processing in the home, with the benefits of low latency and strong privacy guarantees, enables virtual reality (VR), augmented reality (AR), and other next-generation interactive applications. However, processing video feeds with computationally expensive machine learning algorithms is impractical on a single device due to resource limitations. Fortunately, underutilized heterogeneous edge devices are ubiquitous in the home. In this paper, we propose VideoPipe to bridge the gap and run flexible video processing pipelines across multiple devices. To this end, drawing inspiration from the Function-as-a-Service (FaaS) architecture, we unify the runtime environments of the edge devices by introducing modules, the basic units of a video processing pipeline that can be executed on any device. Moreover, as some devices support containers, we further design and implement stateless services for more computationally expensive tasks such as object detection, pose detection, and image classification. Because these services are stateless, they can be shared across pipelines and scaled easily when necessary. Finally, with a uniform design of input and output interfaces, any of the edge devices can be connected to form a video processing pipeline. To evaluate the performance of our system, we design and implement a fitness application on three devices connected through Wi-Fi, as well as a gesture-based Internet of Things (IoT) control application. Experimental results show that VideoPipe is a promising approach to efficient video analytics at the edge.
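
The abstract's core idea, modules with uniform input and output interfaces that can be chained into a pipeline and backed by stateless inference services, can be pictured with a minimal sketch. The paper does not publish its API, so every name below (Module, Pipeline, FrameSource, PoseDetector) is hypothetical; the code only illustrates how uniform interfaces let stages be composed, relocated across devices, or swapped for calls to shared stateless services.

# Illustrative sketch only; all class names are assumptions, not the paper's API.
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class Module(ABC):
    """Basic unit of a pipeline: consumes one message, produces one message."""

    @abstractmethod
    def process(self, message: Dict[str, Any]) -> Dict[str, Any]:
        ...


class FrameSource(Module):
    """Stands in for a camera module; emits dummy frames."""

    def __init__(self) -> None:
        self.frame_id = 0

    def process(self, message: Dict[str, Any]) -> Dict[str, Any]:
        self.frame_id += 1
        return {"frame_id": self.frame_id, "pixels": b"\x00" * 16}


class PoseDetector(Module):
    """Placeholder for a computationally expensive, stateless service.

    In a real deployment this stage would send message["pixels"] to a
    container-hosted service shared by many pipelines and attach the result.
    """

    def process(self, message: Dict[str, Any]) -> Dict[str, Any]:
        message["pose"] = {"keypoints": []}  # dummy inference result
        return message


class Pipeline:
    """Chains modules; because every module exposes the same interface,
    compatible stages can run on any device and still compose the same way."""

    def __init__(self, modules: List[Module]) -> None:
        self.modules = modules

    def run_once(self) -> Dict[str, Any]:
        message: Dict[str, Any] = {}
        for module in self.modules:
            message = module.process(message)
        return message


if __name__ == "__main__":
    pipeline = Pipeline([FrameSource(), PoseDetector()])
    print(pipeline.run_once())

In this sketch the pipeline runs in a single process; the design point it mirrors is that a stage like PoseDetector could just as well forward its input over the network to a stateless service on another device without changing the pipeline's structure.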

Publication
Proc. of the International Middleware Conference 2019 - Industry Track.
Date