Real-time Virtual Reality Video

Virtual reality (VR) video applications couple complex camera systems and head-mounted devices with computational photography algorithms to generate immersive, panoramic video. Rendering 3D-360° VR video from a camera rig is computation-intensive and typically performed offline. Moreover, efficiently managing the resulting VR videos for low-bandwidth viewing is challenging.

Hardware-Friendly Bilateral Solving

We have designed a new algorithm, the hardware-friendly bilateral solver, that enables real-time 3D-360° VR video rendering. Our algorithm is easily parallelized, achieving a 4x speedup on CPU and 32x speedup on GPU over a baseline CPU implementation. We also designed an FPGA-based hardware accelerator that utilizes reduced-precision computation and the parallelism inherent in our algorithm to achieve further speedups over our CPU and GPU implementations while consuming an order of magnitude less power. The FPGA design's power efficiency enables practical real-time VR video processing at the camera rig or in the cloud.
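To illustrate why an "easily parallelized" solver maps well to GPUs and FPGAs, the sketch below contrasts it with a sequential approach using a classic Jacobi iteration for a linear system. This is an illustrative analogy, not the hardware-friendly bilateral solver itself: in a Jacobi-style update, every component of the new iterate depends only on the previous iterate, so all components can be computed simultaneously, one lane per component.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): a Jacobi iteration,
# whose component-wise updates are independent and thus trivially
# parallel. The matrix and sizes below are made-up examples.

def jacobi_solve(A, b, iterations=100):
    """Solve A x = b with Jacobi iteration.

    Each update reads only the previous iterate x, so the whole
    vector can be updated in parallel (vectorized here with NumPy;
    on a GPU or FPGA, one processing lane per component).
    """
    D = np.diag(A)                  # diagonal of A
    R = A - np.diagflat(D)          # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iterations):
        x = (b - R @ x) / D         # fully parallel component-wise update
    return x

# Small diagonally dominant system, which guarantees Jacobi converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi_solve(A, b)
```

Reduced-precision hardware benefits from the same structure: because each lane performs the same simple multiply-accumulate, narrow fixed-point arithmetic can replace full floating point with little quality loss.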


We are also designing a database management system, VisualCloud, to efficiently ingest, store, and deliver VR content at scale. VisualCloud targets both live and prerecorded 360° VR videos. Content delivered through VisualCloud requires 40% less bandwidth than existing methods and scales to many concurrent connections.
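One common way such systems reduce bandwidth is viewport-adaptive streaming: the 360° frame is divided into tiles, and only tiles near the viewer's current orientation are sent at high quality. The sketch below shows this idea in miniature; the tile grid, bitrates, and `tile_bitrates` function are illustrative assumptions of ours, not VisualCloud's actual interface or the source of its 40% figure.

```python
# Illustrative sketch of viewport-adaptive tiling for 360° video.
# Tile counts, bitrates, and function names are assumptions for the
# example, not VisualCloud's API.

def tile_bitrates(n_tiles, viewport_tiles, hi_kbps=4000, lo_kbps=800):
    """Assign each tile a bitrate: high if in view, low otherwise."""
    return [hi_kbps if t in viewport_tiles else lo_kbps
            for t in range(n_tiles)]

# A 4x2 tile grid where the viewer currently faces tiles 1 and 2.
rates = tile_bitrates(8, {1, 2})
naive = 8 * 4000                   # baseline: every tile at high quality
saved = 1 - sum(rates) / naive     # fraction of bandwidth saved
```

In this toy configuration, streaming six of eight tiles at the low rate saves 60% of the naive bandwidth; a real system must also predict head motion and prefetch tiles so quality does not drop when the viewer turns.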