Lost in the Virtual Fog – …And a Lean to the Right

In the first half of my column a while back on the changing ways in which we interact with our computers, I focused on touch and its increasing adoption as an interface of choice, driven by the rapidly growing use of smartphones and other mobile devices. But as computers and digital information weave themselves deeper into the fabric of our daily lives, there is growing interest in new ways to interact with them, and an equally growing number of research projects, prototypes and even consumer products that are focused on making these interfaces a reality.

Microsoft’s Kinect is a great example of a new user interface device that is picking up traction, first in the gaming console space but increasingly in the broader computing world. The Kinect interface is based on tracking the user’s body and movements, which Microsoft refers to as a Natural User Interface (NUI) or Human Interface. So far, a number of commercial and indie video games have used the Kinect, with varying degrees of success, but the real excitement, I think, is that Microsoft is not only allowing but encouraging the community of users and developers to come up with new ways of using the Kinect to interact with the computer, by offering an API that lets anyone develop an application that uses the Kinect sensor. I’ve begun working with the Kinect API myself, and can already envision lots of ways the interface could be used for virtual world navigation and interaction.

At this year’s E3 Conference, Microsoft gave more detail on its vision for how our human-computer interaction mechanisms will evolve with its presentation on Project SmartGlass, which includes Kinect navigation for Xbox Live entertainment in addition to the use of the Kinect as a game controller. You no longer need to hold an interface device, like a controller or mouse, in your hand; your body itself becomes the mechanism for interacting with your computer. You can use hand motions, head movements, and yes, even a lean to the right or left, to execute commands.
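To make that concrete, here is a minimal sketch of what lean detection can look like with the Kinect for Windows SDK v1 skeletal tracking API. The idea of comparing the shoulder-center joint to the hip-center joint, and the 12 cm threshold, are my own assumptions for illustration, not an official gesture recipe; a real application would also smooth the joint positions over several frames before trusting them.

```csharp
using System;
using System.Linq;
using Microsoft.Kinect;

class LeanDemo
{
    const float LeanThreshold = 0.12f; // metres of sideways offset (assumed value)

    static void Main()
    {
        // Grab the first connected Kinect sensor, if any.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += OnSkeletonFrame;
        sensor.Start();

        Console.ReadLine();   // keep the console app alive while frames arrive
        sensor.Stop();
    }

    static void OnSkeletonFrame(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;

            Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);

            foreach (Skeleton s in skeletons.Where(
                k => k.TrackingState == SkeletonTrackingState.Tracked))
            {
                // A sideways offset of the shoulders relative to the hips reads
                // as a lean; which sign maps to left or right depends on the
                // sensor's coordinate convention, so calibrate before wiring
                // this to a real command.
                float offset = s.Joints[JointType.ShoulderCenter].Position.X
                             - s.Joints[JointType.HipCenter].Position.X;

                if (offset > LeanThreshold) Console.WriteLine("Lean one way");
                else if (offset < -LeanThreshold) Console.WriteLine("Lean the other way");
            }
        }
    }
}
```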

In addition, some might not know that the Kinect offers audio input capabilities as well, meaning that you can create custom voice commands for interacting with your computer, your game console, or really any other computing device. This can be a powerful interface combination: you can use voice commands to navigate menus, for example, and combine that with gestures or tracked hand movements to build a rich navigation scheme for a wide range of software applications.
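For a taste of what voice commands look like in code, here is a minimal sketch using the .NET System.Speech recognition engine with a tiny fixed grammar. The command phrases are placeholders I made up, and for simplicity it listens on the default microphone; feeding the Kinect’s array-microphone audio stream into the recognizer is the extra step a real Kinect application would take.

```csharp
using System;
using System.Speech.Recognition;   // in the System.Speech assembly

class VoiceCommands
{
    static void Main()
    {
        // A small, fixed set of placeholder command phrases.
        var commands = new Choices("zoom in", "zoom out", "add layer", "remove layer");
        var grammar = new Grammar(new GrammarBuilder(commands));

        using (var recognizer = new SpeechRecognitionEngine())
        {
            recognizer.LoadGrammar(grammar);
            recognizer.SpeechRecognized += (s, e) =>
                Console.WriteLine("Heard: {0} (confidence {1:0.00})",
                                  e.Result.Text, e.Result.Confidence);

            // Listen on the default microphone and keep recognizing until
            // the user presses Enter.
            recognizer.SetInputToDefaultAudioDevice();
            recognizer.RecognizeAsync(RecognizeMode.Multiple);

            Console.ReadLine();
        }
    }
}
```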

There are limitations, of course, including the sensitivity of interface devices like the Kinect. How does it know whether I want to lean forward to move my avatar or whether I just sneezed and fell forward? There is still a lot of work to be done in the areas of ergonomics, gesture libraries, and other technical issues that can negatively affect a user’s experience with natural interfaces and lead to their rejection as a preferred alternative to the comforting familiarity of the keyboard and mouse. And audio input can suffer from similar issues of precision, as a multitude of users have experienced with that iPhone-encased voice vixen Siri.

And then there’s the issue of how to change people’s perceptions and habits, especially in the working world. Kinect for Windows, a Kinect device calibrated to let the NUI work for desktop computing, is designed to encourage developers and users to think about incorporating gesture-based interfaces and audio into everyday software, including productivity applications like word processors and spreadsheet and data tools. Complex software interfaces full of menus, buttons, and text will be a tough nut to crack for broad adoption, but think of the advantages if it happens. With all of those studies suggesting that spending our days sitting in front of a computer is shortening our lifespans, think about how a natural user interface could get us up on our feet, collaborating around a computer or working together with our mobile devices projecting onto all kinds of surfaces. Minority Report, Total Recall, you name it: we’re seeing that tech in prototypes from projects all over the world.

Why should we in the geography and geospatial community be interested in what’s going on with these new technologies in human-computer interaction? On a general level, there is an inexorable movement toward the demise of the mouse and keyboard as the monopolistic gateways to our relationship with our computer. As a familiar analogue to the typewriter, the keyboard has served us well as we have grown up and into our professional lives in front of a computer. But its analog physicality is now limiting in a world where there’s a race to build and distribute faster, lighter, thinner, smaller, more powerful computing devices that can travel with us anywhere. It’s also limiting when new interfaces like an NUI mean that we could gather around our monitor or projection screen with our GIS open and have discussions, with multiple users moving the map view around, adding and removing layers, performing analyses, etc., all while comfortably interacting and not tied to a physical input device. With that visual of the geospatial collaboration lab I now want to build for my teaching and research, I’ll sign off for now and wander off into the virtual fog until next time.

Lost in the Virtual Fog – Just a Swipe to the Left

It’s been quite a while since my last column in this series, and a lot has happened in the geospatial world and the world of computing in general. I hope to give my thoughts on some of these trends over the next few months as I catch up to the world around me after finally finishing my PhD. One of the trends that I have been following with a lot of interest is definitely the move toward new ways of interacting with our computers.

To kick things off, I wanted to talk a little bit about what’s been going on with the growing presence of touch interfaces. While the keyboard and mouse still reign supreme in desktop computing, the success of the iPad and other tablets, as well as smartphones, has definitely broadened the reach of touch as a user interface. And that is filtering its way back into the desktop computing space, with the rise in popularity of all-in-one computers with touch-enabled monitors. In fact, I am writing this post on one of those new touch all-in-ones, the Lenovo A720.

I am finding my own computing behavior changing as well. For my day-to-day work and web surfing, I rely on my Windows slate tablet (ASUS Eee Slate EP121), which is also touch-enabled. Most of the devices I interact with on a daily basis have touch interfaces, and I’ve become so used to it that I often find myself touching a laptop or desktop monitor and wondering why nothing is happening.

So why is touch such a big deal? Because more and more of the devices that we either use now or are going to use in the near future (think smartphones and tablets) rely on a touch interface, and that means that software applications, even expert software like GIS or 3D virtual landscapes, will need to be touch-compatible if they are going to make the transition to new hardware platforms. Even more importantly, software users who haven’t worked with touch are probably going to have to come to grips with this new interface style. Some people take to touch quite quickly and intuitively, while others are going to struggle a bit.

On the developer side, writing applications with touch capabilities presents challenges, such as handling the precision of finger movements and touch pressure, and creating meaningful gestures for complex commands. When you’re talking about a complex series of tasks like working with map layers in a GIS, for instance, it can get a bit tricky. Still, we already have the example of ArcGIS for iOS, which runs on the small iPhone screen as well as the larger iPad and works quite well with a number of touch gestures. However, it’s quite a leap from streamlined, lightweight mobile apps to a full-fledged desktop GIS package, which might require a real rethinking of how users interact with the various modules, viewers and tools before a satisfying touch interface is possible.
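To give a sense of what the developer side looks like, here is a minimal sketch of pan and pinch-to-zoom handling in an XNA 4.0 Update loop, since XNA is what I have been working in. The mapOffset and mapZoom fields are placeholder camera state of my own invention; a real GIS-style viewer would translate these gestures into its own navigation and layer commands.

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input.Touch;

public class MapGame : Game
{
    GraphicsDeviceManager graphics;
    Vector2 mapOffset = Vector2.Zero;   // placeholder camera state
    float mapZoom = 1.0f;

    public MapGame()
    {
        graphics = new GraphicsDeviceManager(this);
    }

    protected override void Initialize()
    {
        // Ask the touch panel only for the gestures we actually handle.
        TouchPanel.EnabledGestures = GestureType.FreeDrag | GestureType.Pinch;
        base.Initialize();
    }

    protected override void Update(GameTime gameTime)
    {
        while (TouchPanel.IsGestureAvailable)
        {
            GestureSample gesture = TouchPanel.ReadGesture();

            switch (gesture.GestureType)
            {
                case GestureType.FreeDrag:
                    // Pan the map by the finger's movement this frame.
                    mapOffset += gesture.Delta;
                    break;

                case GestureType.Pinch:
                    // Compare current and previous distances between the two
                    // touch points to derive a zoom factor.
                    float current = Vector2.Distance(gesture.Position, gesture.Position2);
                    float previous = Vector2.Distance(gesture.Position - gesture.Delta,
                                                      gesture.Position2 - gesture.Delta2);
                    if (previous > 0f)
                        mapZoom *= current / previous;
                    break;
            }
        }
        base.Update(gameTime);
    }
}
```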

So, even if you’re not a user of a touch device now, you may find that changing in the near future. Computing platforms and interfaces are changing whether everyone likes it or not and, while I don’t think keyboards and mice are going away anytime soon, in the world of technology they’ve been around for ages and may find themselves going the way of the punch card.

First Annual SketchUp Halloween Challenge

SketchUp has announced its first annual Halloween Challenge. You can pick one of three categories: 1) Jack O’Lantern, 2) Haunted House, or 3) both together. You need to fill out a challenge submission form and upload your model to the 3D Warehouse in a publicly downloadable format. The SketchUp team will judge the entries on October 28th. Here is a link to Googlemeister’s Amazing Haunted House Walk Through Collection from last year in 3D Modeler.

If you need some inspiration, here is a link to TV Tropes’ coverage of haunted houses (real and fictional locations), traditional Halloween, Halloweentown-type locations, and popular Halloween episodes.

Lost in the Virtual Fog – A Question of Scale

I have been remiss in not doing any diary entries for a while, but I have been feverishly working to get my demo XNA application ready for the ESRI UC presentation. Finally, today I think I got the last bit of functionality on my list working, so I am pretty excited and crossing my fingers that everything will run right at the conference. Of course, you can never predict a live demo, so tonight I am recording a few videos of Virtual Morgantown in action, using a cool little software tool called GameCam.

Last month, we were happy to go to Pittsburgh to cover the Game Education Summit, held at Carnegie Mellon’s Entertainment Technology Center. We got some nice interviews, including the conversation with ETC Pittsburgh Director Drew Davidson, which we featured on Episode 206 of the podcast. While I’ll have another entry soon specifically about my thoughts on the Game Education Summit, since I’ve gotten back I’ve had to burn the midnight oil to get Virtual Morgantown looking and running the way I want it for this stage in the project. As I’ve been sitting here opening each model in SketchUp, cleaning it up where I can, and exporting it to the XNA application as an .X file (many, many thanks to Zbyl for the .X Exporter plugin!), I am continually reminded of the challenges of working at this scale after coming from a GIS background.
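For anyone curious about what happens after the export step, here is a minimal sketch of how one of those converted models gets loaded and drawn in XNA through the content pipeline. The asset name, scale, and position are placeholders rather than the actual Virtual Morgantown data, and I am assuming the default .X importer produces standard BasicEffect materials, which is what the stock pipeline does.

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

public class BuildingRenderer
{
    Model building;
    Matrix world;

    public void Load(ContentManager content)
    {
        // Placeholder asset name: an .X file added to the content project,
        // which the pipeline compiles into a Model at build time.
        building = content.Load<Model>("Buildings/Courthouse");

        // Placeholder placement: in the real project, the scale and position
        // would come from the GIS coordinates of the building footprint.
        world = Matrix.CreateScale(0.01f) * Matrix.CreateTranslation(120f, 0f, -45f);
    }

    public void Draw(Matrix view, Matrix projection)
    {
        // Standard XNA idiom for drawing a pipeline-built model.
        foreach (ModelMesh mesh in building.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.World = world;
                effect.View = view;
                effect.Projection = projection;
            }
            mesh.Draw();
        }
    }
}
```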

When you can zoom in and essentially immerse yourself at nearly 1:1 scale in the virtual world, issues that never would have mattered suddenly become vital. Even Google Earth, World Wind, ArcGIS Explorer, and the other virtual globes aren’t really meant to be used at that scale, as their background imagery and 3D models look best from a viewpoint well above ground level. So, when you get down to the ground and are actually representing real features, you have to give each one at least some individual attention. It’s often the opposite of the way most of us are trained: rather than looking for commonalities and creating data layers that characterize those similarities, you have to bring out the aspects that make a particular feature unique.

It’s a strange notion for a lot of geographers and GIS people, I think, to shift their perspective from starting with a zoomed-out view of the world and drilling down toward individual features, to starting with the viewpoint of a single person in the world and then having to move and explore in order to identify and understand the nature of the virtual environment they’re immersed in. And the more you’re drawn into the virtual world, the more obvious those individual differences become, and the more important it is for the creator of the simulation or interactive environment to pay attention to the small design details that help form a sense of actually experiencing the virtual world.

As I have progressed from childlike wonder and delight over my ability to create a simple XNA application with real-world terrain data, to relief when each one of my new functions actually builds and runs or when I get my model assets adjusted to just the right location and height, I am becoming even more of a believer in taking gaming technology and design seriously, and looking at how we can create virtual world applications that integrate aspects from many different areas, from gaming to GIS and geospatial to geography, and even history and other disciplines.

Lost in the Virtual Fog – making the interactive connection, part 1

My dissertation proposal defense is finally over and, since I passed, it’s time to get back to work and really ramp up the functionality of the Spatial Experience Engine. During my presentation and in the Q-and-A with my committee afterward, I kept coming back to the issue of interaction in the virtual world. I’ve been ruminating on this quite a bit over the last few months, once I got the basic terrain and model drawing work out of the way. I think the key to making all of this work and be compelling for users is, of course, the UI (user interface).

Lost in the Virtual Fog – Seeing Really is Believing

So, this past week has been a flurry of activity for me, as I’ve had to schedule and then reschedule my PhD dissertation proposal defense. But it finally seems that we are set for next Tuesday morning, so now I have some time to catch up on my diary entry. I was contemplating what to write this week when I was reminded this morning of something we’ve noticed ever since we started demonstrating our work in virtual worlds, and especially the gaming technology: seeing really is believing.

We’ve been giving demos of our virtual world work in our VR CAVE this week, both the older ArcScene projects and the new XNA application, and I’ve been really happy with the response we’ve been getting from people in lots of different fields. One common theme among the visitors I’ve talked to is that many of them had heard something about gaming technology, serious games, or virtual worlds, but they didn’t really get what people were talking about until they could see it actually working. Once they got over the initial wow factor, it was fun to watch GIS professionals, city planners, and educators start brainstorming about how these applications could be used for all kinds of projects. After all the frustrating hours spent hunched over a keyboard staring at a stupid 3D model and wondering why the roof isn’t square, or screaming at the computer because clearly it is incapable of understanding my perfectly written code, it is really nice to see that we are creating something that people appreciate and can see the value of.