Today at lunch we were kicking around some ideas on connecting various content with other content, or with a location, in a user-friendly way, and I couldn't get past the thought of a touch interface. The problem is that we are still tied to a few things in the tech arena that make this a limiting factor for deployment. The main issues are interface, hardware, and SDK/API support.
Interface is fairly obvious since touch apps are still new on the scene, especially well-designed ones. Apple's iPhone OS has the ball rolling on the mobile side, with Android looking to catch up, but Windows 7 is the only route right now for a broad-audience app on laptops or desktops, though there are some Linux options if you aren't worried about wide distribution. Either way, the Win7 and Linux interfaces aren't necessarily the most robust at this point, since developers are just beginning to take advantage of what is available in the OS. On the upside, this means we don't have to wait for device makers to roll out drivers and app compatibility for each device, since apps can probably use the OS-native libraries; on the downside, we now have to wait for software manufacturers to build support for more than mouse input into their software.
On the hardware side, especially the desktop (aka large[r] screens), there is still a long wait ahead for device manufacturers to add touch screens across their product lines. Cell phones are there, and the upcoming tablets look promising, but if you want more than two people looking at your screen at once, you either have to jump up to research-grade budgets to afford a device or expect folks to build a multiuser, larger display themselves. We have really liked some of the interfaces we have seen at various conferences, like the MS Surface, DiamondTouch, or TouchTable, but they have the same limitation as something like a VR Cave or other stereo viewing device: they are meant to stay in one place, so people have to come to them.
Looping back around to developers and native OS libraries: YAY! It is phenomenal that we have access to hardware and SDKs that let us create whatever touch apps our imaginations can dream up. The SDKs available for GIS applications, however, are still limited. Just as above, we are restricted to the input methods that applications and OSes allow. Autodesk has applications that support Windows 7 multitouch (Mudbox, for example) and ESRI is demoing an iPhone app, but will we be able to build apps with these libraries, or will we have to recreate the functionality of these types of apps ourselves? It is a question we will soon see answered, as more users acquire touch-capable hardware, which in turn will encourage developers to create for them in ever more interesting ways.