The KrishnaCam Dataset

Krishna Kumar Singh, CMU
Kayvon Fatahalian, CMU
Alexei Efros, UC Berkeley

Dataset

KrishnaCam is a large (7.6 million frames, 70 hours) egocentric video stream that spans nine months in the life of a single computer vision graduate student. The dataset was recorded using Google Glass and contains 30 fps, 720p video, but no audio. In addition to the video, KrishnaCam includes the GPS position of the camera, as well as the acceleration and orientation of the camera wearer's body (not the head). All recording was performed in outdoor, public areas or in the camera wearer's home.

Final cleanup is in progress. Instructions for accessing the KrishnaCam dataset will be posted soon.
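Because the release format has not been posted yet, any loading code is necessarily speculative. The sketch below only illustrates, under the assumption of hypothetical per-recording CSV sensor logs with a "timestamp" column, how timestamped GPS or accelerometer samples might be aligned with the 30 fps video frames; none of the file names or fields come from the actual dataset.

# Hypothetical alignment sketch: the KrishnaCam file layout has not been
# published, so the file names and fields below are assumptions.
import bisect
import csv

FPS = 30.0  # video frame rate reported for the dataset

def load_samples(path):
    """Load (timestamp_seconds, row) pairs from a hypothetical CSV sensor log."""
    samples = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            samples.append((float(row["timestamp"]), row))
    samples.sort(key=lambda s: s[0])
    return samples

def nearest_sample(samples, t):
    """Return the sensor sample whose timestamp is closest to time t."""
    times = [s[0] for s in samples]
    i = bisect.bisect_left(times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(samples)]
    return min((samples[j] for j in candidates), key=lambda s: abs(s[0] - t))

def sensor_for_frame(frame_index, video_start_time, samples):
    """Look up the sensor reading nearest in time to a given video frame."""
    t = video_start_time + frame_index / FPS
    return nearest_sample(samples, t)

# Example usage (hypothetical file name):
# gps = load_samples("recording_0001_gps.csv")
# print(sensor_for_frame(4500, video_start_time=0.0, samples=gps))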

Paper

KrishnaCam: Using a Longitudinal, Single-Person, Egocentric Dataset for Scene Understanding Tasks
K. Singh, K. Fatahalian, A. Efros
WACV 2016

Abstract
We record, analyze, and present to the community KrishnaCam, a large (7.6 million frames, 70 hours) egocentric video stream, along with GPS position, acceleration, and body orientation data, spanning nine months of the life of a computer vision graduate student. We explore and exploit the inherent redundancies in this rich visual data stream to answer simple scene understanding questions such as: How much novel visual information does the student see each day? Given a single egocentric photograph of a scene, can we predict where the student might walk next? We find that, given our large video database, simple nearest-neighbor methods are surprisingly adept baselines for these tasks, even in scenes and scenarios where the camera wearer has never been before. For example, we demonstrate the ability to predict the near-future trajectory of the student in a broad set of outdoor situations, including following sidewalks, stopping to wait for a bus, taking a daily path to work, and remaining stationary while eating food.
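As a rough illustration of the kind of nearest-neighbor baseline the abstract describes, the sketch below retrieves the database frames most visually similar to a query image feature and averages the trajectories the camera wearer actually followed after those frames. The feature representation, the cosine similarity, and the trajectory encoding are placeholder assumptions, not the paper's exact pipeline.

# Minimal nearest-neighbor trajectory-transfer sketch (assumed components,
# not the method as implemented in the paper).
import numpy as np

def cosine_knn(query_feat, database_feats, k=5):
    """Return indices of the k database frames most similar to the query."""
    q = query_feat / (np.linalg.norm(query_feat) + 1e-8)
    db = database_feats / (np.linalg.norm(database_feats, axis=1, keepdims=True) + 1e-8)
    sims = db @ q
    return np.argsort(-sims)[:k]

def predict_trajectory(query_feat, database_feats, database_trajectories, k=5):
    """Average the recorded future paths of the retrieved neighbors.

    database_trajectories[i] is assumed to be the (T, 2) ground-plane path the
    camera wearer followed after database frame i (a hypothetical encoding).
    """
    idx = cosine_knn(query_feat, database_feats, k)
    return np.mean([database_trajectories[i] for i in idx], axis=0)

# Example with random stand-in data:
feats = np.random.randn(10000, 4096)    # one feature vector per database frame
trajs = np.random.randn(10000, 30, 2)   # 30-step future path per database frame
pred = predict_trajectory(np.random.randn(4096), feats, trajs, k=5)
print(pred.shape)  # (30, 2): averaged near-future trajectory for the query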