Oculus, Kinect and Extended Reality
What happens when we create a virtual boundary that combines two levels of perceptual reality? What happens when that virtual boundary opens a window back onto the real world?
These were the questions Jason Levine and Noah Pivnick addressed in their experiments mashing up the Kinect depth sensor with the Oculus Rift.
The team probed for limitations and sweet spots, exploring ideas such as offering ‘reverse views’ of the world and making subtle shifts in how the world is represented to the viewer.
The central challenge was keeping the viewer from experiencing nausea, a common side effect of the Rift. Fortunately, the team succeeded.
Technical Challenges
The team built the mashup in openFrameworks, a creative coding environment that already had addons available for both the Kinect and the Oculus. The main technical challenge was making the two devices work together.
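The write-up does not name the specific addons; a minimal sketch of the pairing, assuming ofxKinect plus an ofxOculusDK2-style addon (the baseCamera field and the per-eye begin/end calls follow that addon's examples and are assumptions here), might look like this:

```cpp
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxOculusDK2.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    ofxOculusDK2 rift;
    ofEasyCam cam;
    ofMesh pointCloud;

    void setup() {
        kinect.init();           // depth + RGB streams
        kinect.open();           // connect to the first Kinect found
        rift.baseCamera = &cam;  // the addon renders the scene from this camera
        rift.setup();
    }

    void update() {
        kinect.update();
        if (!kinect.isFrameNew()) return;
        // Rebuild a sparse point cloud from the new depth frame.
        pointCloud.clear();
        pointCloud.setMode(OF_PRIMITIVE_POINTS);
        for (int y = 0; y < kinect.height; y += 4) {
            for (int x = 0; x < kinect.width; x += 4) {
                float d = kinect.getDistanceAt(x, y); // mm; 0 means no reading
                if (d > 0) {
                    pointCloud.addColor(kinect.getColorAt(x, y));
                    pointCloud.addVertex(kinect.getWorldCoordinateAt(x, y));
                }
            }
        }
    }

    void drawScene() {
        ofPushMatrix();
        ofScale(1, -1, -1); // flip Kinect camera space into the GL scene
        pointCloud.draw();
        ofPopMatrix();
    }

    void draw() {
        // Render the scene once per eye; the addon applies the Rift's
        // per-eye projection and lens distortion.
        rift.beginLeftEye();   drawScene();  rift.endLeftEye();
        rift.beginRightEye();  drawScene();  rift.endRightEye();
        rift.draw();
    }

    void exit() { kinect.close(); }
};
```

The point-cloud route is one plausible way to put live Kinect data inside the Rift's stereo view; the team's actual rendering approach may have differed.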
The hardware failed in week one: the Oculus needed a new power supply, illustrating the kind of real-world problems encountered when working with engineering and electronics.
The team experimented with mirrors and sunlight, but both interfered with the Kinect's depth sensing, which relies on a projected infrared pattern that direct sun washes out and mirrors scatter. The incompleteness of the resulting depth and camera data, however, produces a distinctive ‘Kinect aesthetic’.
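That aesthetic is visible directly in the data: ofxKinect reports pixels with no valid infrared return as a distance of zero, so the holes can be isolated in a few lines. A small sketch, again assuming ofxKinect (dropoutMask is a hypothetical helper name, not from the source):

```cpp
#include "ofMain.h"
#include "ofxKinect.h"

// Build a grayscale mask of depth dropouts: white where the Kinect got
// no valid infrared reading (washed-out sunlight, mirrors, absorbent
// surfaces), black where depth is valid. These holes are the raw
// material of the 'Kinect aesthetic'.
ofImage dropoutMask(ofxKinect& kinect) {
    ofImage mask;
    mask.allocate(kinect.width, kinect.height, OF_IMAGE_GRAYSCALE);
    ofPixels& px = mask.getPixels();
    for (int y = 0; y < kinect.height; y++) {
        for (int x = 0; x < kinect.width; x++) {
            bool missing = kinect.getDistanceAt(x, y) == 0; // 0 = no reading
            px.setColor(x, y, missing ? ofColor(255) : ofColor(0));
        }
    }
    mask.update(); // upload the pixels to a texture so the mask can be drawn
    return mask;
}
```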