Gesture-based input for 3D

Engadget has a post on how iPoint 3D brings gesture-based input to 3D displays, with an example pic straight out of an Autodesk demo. The pic aside, I expect the technology they are discussing is fairly similar to what Microsoft uses for the Surface (without having to actually touch the panel), and it has been shown off a few times by CompSci students working on class projects with a simple webcam.

This isn't to downplay the potential of these interfaces; on the contrary, I want to know when they will be stable enough to include in the OS (with Windows and Mac both adding touch in their next releases). These gesture-based interfaces aren't for every application, but tie them to facial recognition and you could have a multiuser interface that tracks each user's 'settings'. Link one to a 3D representation of a landscape and imagine being able to SuperMan your way through the scene with the traditional one-hand flying pose. What if we could tie sign language gestures into the interface, taking advantage of an existing standard and teaching the rest of us a few words at the same time?

So come on, iPoint and others, let's see what happens when we step away from the monitor with our gestures!

Written by

Jesse is an Instructor in Geography and a PhD candidate in Geography focusing on the integration of phenomenology and geospatial technologies to study prehistoric cultural landscapes. He is a GIS Professional and Registered Professional Archaeologist and holds an MA in Geography and a BS in Anthropology with a concentration in archaeology.