Lost in the Virtual Fog – …And a Lean to the Right

In the first half of my column a while back on the changing ways in which we interact with our computers, I focused on touch and its increasing adoption as an interface of choice, driven by the rapidly growing use of smartphones and other mobile devices. But as computers and digital information weave themselves deeper into the fabric of our daily lives, there is growing interest in new ways to interact with them, and an equally growing number of research projects, prototypes and even consumer products that are focused on making these interfaces a reality.

Microsoft’s Kinect is a great example of a new user interface device that is gaining traction, first in the gaming console space, but increasingly getting attention in the broader computing world. The Kinect interface is based on tracking the user’s body and movements, which Microsoft refers to as a Natural User Interface (NUI) or Human Interface. So far, a number of commercial and indie video games have used the Kinect, with varying degrees of success, but the real excitement with the Kinect, I think, is that Microsoft is not only allowing but encouraging the community of users and developers to come up with new ways of using it to interact with the computer, offering an API that lets anyone develop an application that uses the Kinect sensor. I’ve begun working with the Kinect API myself, and can already envision lots of ways the interface could be used for virtual world navigation and interaction.
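
To make that idea a bit more concrete, here is a minimal sketch, in Python rather than the SDK’s native C++ or C#, of the kind of loop I have in mind: take the per-frame joint positions the Kinect hands back (represented here as a plain dictionary, since I’m leaving out the real SDK types) and translate where the right hand sits relative to the shoulder into simple avatar navigation intents. The joint names and thresholds are my own illustrative assumptions, not anything from Microsoft’s API.

```python
# A minimal sketch: map tracked joint positions to virtual-world navigation.
# The joint dict stands in for a Kinect skeleton frame; coordinates are
# meters in a sensor-centered frame (x to the right, y up, z away from you).

def navigation_intent(joints, reach=0.35):
    """Return a navigation command based on where the right hand sits
    relative to the right shoulder. The thresholds are illustrative."""
    hx, hy, hz = joints["hand_right"]
    sx, sy, sz = joints["shoulder_right"]
    if sz - hz > reach:        # hand pushed well forward of the shoulder
        return "move_forward"
    if hx - sx > reach:        # hand extended out to the right
        return "turn_right"
    if sx - hx > reach:        # hand extended out to the left
        return "turn_left"
    return "idle"

# One made-up frame, as a real capture loop would supply roughly 30 times a second.
frame = {"hand_right": (0.20, 1.10, 1.40), "shoulder_right": (0.15, 1.30, 1.90)}
print(navigation_intent(frame))   # -> move_forward
```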

At this year’s E3 conference, Microsoft gave more detail on their vision for how our human-computer interaction mechanisms will evolve with their presentation on Project SmartGlass, which includes Kinect navigation for Xbox Live entertainment in addition to using the Kinect as a game controller. You no longer need to hold an interface device, like a controller or mouse, in your hand; your body itself becomes the mechanism for interacting with your computer. You can use hand motions, head movements, and yes, even a lean to the right or left, to execute commands.
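
A lean turns out to be easy to express in the same vocabulary: compare the horizontal offset between a point high on the body and a point low on the body. The sketch below, again with stand-in joint names and a threshold I picked purely for illustration, classifies a frame as a left lean, a right lean, or upright.

```python
# Sketch: classify a lean by comparing a high point and a low point on the torso.
# The joint dict again stands in for a Kinect skeleton frame, and the 0.12 m
# threshold is my own assumption, not a value from the SDK.

def lean_direction(joints, threshold=0.12):
    offset = joints["head"][0] - joints["hip_center"][0]   # + = head displaced right
    if offset > threshold:
        return "lean_right"
    if offset < -threshold:
        return "lean_left"
    return "upright"

frame = {"head": (0.20, 1.65, 2.0), "hip_center": (0.02, 1.00, 2.0)}
print(lean_direction(frame))   # -> lean_right
```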

In addition, some might not know that the Kinect offers audio input capabilities as well, meaning that you can create custom voice commands for interacting with your computer, your game console, or really any other computing device. This can be a powerful interface combo: you can use voice commands to navigate menus, for example, and combine them with gestures or tracked hand movements to create a rich navigation scheme for a wide range of software applications.
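
One way to picture that combination: let the recognized voice phrase choose the command and let the current gesture refine it. The phrases and gesture labels below are purely illustrative, not the actual grammar format of the Kinect speech API.

```python
# Sketch: a recognized voice phrase chooses the command and the current
# gesture refines it. Phrases and gesture labels here are purely illustrative.

def dispatch(phrase, gesture):
    if phrase == "open menu":
        return "show_main_menu"
    if phrase == "select" and gesture == "point_right_hand":
        return "activate_item_under_cursor"
    if phrase == "zoom" and gesture in ("lean_left", "lean_right"):
        return "zoom_out" if gesture == "lean_left" else "zoom_in"
    return "no_op"

print(dispatch("select", "point_right_hand"))   # -> activate_item_under_cursor
print(dispatch("zoom", "lean_right"))           # -> zoom_in
```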

There are limitations, of course, including the sensitivity of interface devices like the Kinect. How does it know whether I want to lean forward to move my avatar or whether I just sneezed and fell forward? There is still a lot of work to be done in the areas of ergonomics, gesture libraries, and other technical issues that can negatively affect a user’s experience with natural interfaces and lead to their rejection as a preferred alternative to the comforting familiarity of the keyboard and mouse. And audio input can suffer from similar issues of precision, as a multitude of users have experienced with that iPhone-encased voice vixen Siri.
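
Part of the answer is simply not trusting any single frame. A common trick, sketched below with a frame count I chose arbitrarily, is to require a pose to persist for some minimum number of consecutive frames before it fires, so a sneeze that crosses a threshold for a split second never turns into a command.

```python
# Sketch: only report a gesture after it persists for hold_frames consecutive
# frames, so a one-frame wobble (a sneeze, a stumble) never becomes a command.
# The frame count is an arbitrary illustrative choice, not an SDK setting.

class GestureDebouncer:
    def __init__(self, hold_frames=15):   # roughly half a second at 30 fps
        self.hold_frames = hold_frames
        self.current = None
        self.count = 0

    def update(self, gesture):
        """Feed one per-frame gesture label; return it only on the frame it
        becomes stable, otherwise return None."""
        if gesture == self.current:
            self.count += 1
        else:
            self.current, self.count = gesture, 1
        return gesture if self.count == self.hold_frames else None

debounce = GestureDebouncer(hold_frames=3)
stream = ["lean_forward", "upright", "lean_forward", "lean_forward", "lean_forward"]
print([debounce.update(g) for g in stream])
# -> [None, None, None, None, 'lean_forward']
```

Returning the label only on the exact frame the count is reached also means a held pose fires once instead of repeating every frame, which is usually what you want for discrete commands.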

And then there’s the issue of how to change people’s perceptions and habits, especially in the working world. Kinect for Windows, a version of the Kinect calibrated so the NUI works for desktop computing applications, is designed to encourage developers and users to think about incorporating gesture-based interfaces and audio into everyday software, including productivity applications like word processors and spreadsheet and data tools. Complex software interfaces full of menus, buttons, and text will be a tough nut to crack for broad adoption, but think of the advantages if natural interfaces can crack it. With all of those studies suggesting that spending our days sitting in front of a computer is shortening our lifespans, think about how a natural user interface could get us up on our feet, collaborating around a computer or working together with our mobile devices projecting onto all kinds of surfaces. Minority Report, Total Recall, you name it, we’re seeing the tech in prototypes from projects all over the world.

Why should we in the geography and geospatial community be interested in what’s going on with these new technologies in human-computer interaction? On a general level, there is an inexorable movement toward the demise of the mouse and keyboard as the monopolistic gateways to our relationship with our computers. As a familiar analogue to the typewriter, the keyboard has served us well as we have grown up and into our professional lives in front of a computer. But its analog physicality is now limiting in a world racing to build and distribute faster, lighter, thinner, smaller, more powerful computing devices that can travel with us anywhere. It’s also limiting when a new interface like an NUI means we could gather around a monitor or projection screen with our GIS open and have discussions, with multiple users moving the map view around, adding and removing layers, and performing analyses, all while interacting comfortably and not tied to a physical input device. With that visual of the geospatial collaboration lab I now want to build for my teaching and research, I’ll sign off for now and wander off into the virtual fog until next time.
