Leap through Space
We were browsing through different types of media when we came across a 3D video display. There was no viable way for viewers to navigate the 3D space of the video, or for an artist to show others the exact story they wanted to tell. And although these videos occupy a 3D space, users can only edit them in 2D, because the footage is flattened and our computers' screens and input devices are flat. This, along with the idea of combining a traditional controller (a dial) with a new one (a motion sensor), gave us the idea for our hack.
What it does
Leap through Space is a 3D video controller that lets users easily navigate 360° videos with hand gestures and movements, using a Leap Motion and a Griffin Powermate. Spinning the Powermate rotates the video sphere, while placing your hand on different parts of the physical controller (a Red Bull can) selects rotation along different axes and controls the zoom.
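The mapping from raw controller input to camera state can be sketched roughly as below. This is a minimal illustration in plain Python, not our Unity code: the function name, the two hand zones, and the tick count are all assumptions made for the example.

```python
def update_camera(yaw_deg, zoom, dial_delta, hand_zone, ticks_per_rev=96):
    """Map raw dial + hand input to camera state (illustrative sketch).

    dial_delta: ticks the dial reported since the last frame (hypothetical units).
    hand_zone: which part of the physical controller the hand covers;
               'side' selects rotation, 'top' selects zoom (an assumed scheme).
    """
    if hand_zone == 'side':
        # One full turn of the dial sweeps 360 degrees of rotation.
        yaw_deg = (yaw_deg + dial_delta * 360.0 / ticks_per_rev) % 360.0
    elif hand_zone == 'top':
        # Reuse the dial delta as a zoom step, clamped to a sane range.
        zoom = min(4.0, max(0.5, zoom + dial_delta * 0.05))
    return yaw_deg, zoom
```

The same pattern extends to the other axes: the hand zone picks which state variable the dial delta is applied to.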
How we built it
We used the Leap Motion and Griffin Powermate for input, Unity for gesture algorithms and collision detection, OSC to send data from Unity to openFrameworks, and openFrameworks to map the flat 2D frames produced by a 360° camera onto a virtual 3D object.
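openFrameworks handles the actual texture mapping, but the core math is the standard equirectangular projection: each direction on the sphere is converted to a longitude/latitude pair and normalized into texture coordinates. A minimal sketch in plain Python (the function name is ours):

```python
import math

def equirect_uv(x, y, z):
    """Map a direction on the unit sphere to (u, v) texture coordinates
    in an equirectangular (360-degree) video frame.

    Longitude comes from atan2 in the horizontal plane, latitude from
    asin of the vertical component; both are normalized to [0, 1].
    """
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return u, v
```

Evaluating this per vertex of a sphere mesh is exactly how a flat 360° frame ends up wrapped around a virtual sphere the viewer sits inside.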
Challenges we ran into
Because the data passed through so many layers, testing was difficult and compatibility hard to verify. We had to fully sort out one stage of the pipeline before we could see whether the next would work with it, which slowed us down.
Accomplishments that we’re proud of
Managing to get Leap Motion to send data across computers was a big win for us!
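In the end, the cross-machine transfer comes down to sending OSC packets over UDP. A minimal sketch of the OSC 1.0 framing involved, in plain Python (the address and function name here are illustrative, not the ones our project used):

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message (address + float32 args) for UDP.

    OSC 1.0 framing: the address and type-tag strings are null-terminated
    and padded to 4-byte boundaries; arguments are big-endian 32-bit floats.
    """
    def pad(s):
        b = s.encode('ascii') + b'\x00'
        return b + b'\x00' * (-len(b) % 4)
    msg = pad(address) + pad(',' + 'f' * len(floats))
    for f in floats:
        msg += struct.pack('>f', f)
    return msg

# Sending across machines is then a single UDP datagram, e.g.:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message('/palm/pos', x, y, z), (host, 9000))
```

Since each message is one self-describing datagram, the receiver (openFrameworks' OSC support on the other machine) can decode palm positions frame by frame with no connection setup.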
This project was built at Treehacks 2017, Stanford University's hackathon.