360° Video Viewing Dataset in Head-Mounted Virtual Reality


360° videos and Head-Mounted Displays (HMDs) are becoming increasingly popular. However, streaming 360° videos to HMDs is challenging: only the video content in a viewer's Field-of-View (FoV) is rendered, so sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing 360° video streaming to HMDs is highly data and viewer dependent, and thus calls for real datasets. To the best of our knowledge, however, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra effort into aligning the content and sensor data using the timestamps in the raw log files. The resulting datasets can be used by researchers, engineers, and hobbyists both to optimize existing 360° video streaming applications (such as rate-distortion optimization) and to enable novel applications (such as crowd-driven camera movements). We believe that our datasets will stimulate more research activities along this exciting new research direction.
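To illustrate the timestamp-based alignment mentioned above, here is a minimal sketch in Python. The frame timestamps, sensor sample layout, and the nearest-timestamp strategy are assumptions for illustration only; they are not taken from the dataset's actual log format.

```python
from bisect import bisect_left

def align(frame_ts, sensor):
    """For each frame timestamp, pick the sensor sample (ts, yaw)
    whose timestamp is closest. `sensor` must be sorted by timestamp."""
    sensor_ts = [ts for ts, _ in sensor]
    aligned = []
    for ft in frame_ts:
        i = bisect_left(sensor_ts, ft)
        # Compare the neighboring samples at i-1 and i, keep the nearer one.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(sensor)]
        j = min(candidates, key=lambda k: abs(sensor_ts[k] - ft))
        aligned.append((ft, sensor[j][1]))
    return aligned

# Hypothetical data: 30 fps frame timestamps and (timestamp, yaw°) samples.
frames = [0.0, 0.033, 0.066]
samples = [(0.001, 10.0), (0.030, 12.0), (0.070, 15.0)]
print(align(frames, samples))  # → [(0.0, 10.0), (0.033, 12.0), (0.066, 15.0)]
```

Each rendered frame is thus paired with the head orientation recorded closest in time, which is the essential step in relating FoV traces to video content.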


Wen-Chih Lo, Ching-Ling Fan, Jean Lee, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu, "360° Video Viewing Dataset in Head-Mounted Virtual Reality," in Proceedings of ACM MMSys, June 2017


@inproceedings{lo17:vr360dataset,
  author    = {Lo, Wen-Chih and Fan, Ching-Ling and Lee, Jean and Huang, Chun-Ying and Chen, Kuan-Ta and Hsu, Cheng-Hsin},
  title     = {360° Video Viewing Dataset in Head-Mounted Virtual Reality},
  booktitle = {Proceedings of ACM Multimedia Systems 2017},
  pages     = {211--216},
  year      = {2017}
}