A VerySpatial Podcast
Shownotes – Episode 368
August 5, 2012
Main Topic: Our conversation with Keith Masback, USGIF
It’s been quite a while since my last column in this series, and a lot has happened in the geospatial world and the world of computing in general. I hope to give my thoughts on some of these trends over the next few months as I catch up to the world around me after finally finishing my PhD. One of the trends I have been following with a lot of interest is the move toward new ways of interacting with our computers.
To kick things off, I wanted to talk a little bit about what’s been going on with the growing presence of touch interfaces. While the keyboard and mouse still reign supreme in desktop computing, the success of the iPad and other tablets, as well as smartphones, has definitely broadened the reach of touch as a user interface. And that is filtering its way back into the desktop computing space, with the rise in popularity of all-in-one computers with touch-enabled monitors. In fact, I am writing this post on one of those new touch all-in-ones, the Lenovo A720.
I am finding my own computing behavior changing as well. For my day-to-day work and web surfing, I rely on my Windows slate tablet (ASUS Eee Slate EP121), which is touch-enabled as well. Most of the devices I interact with on a daily basis are touch interfaces and I’ve become so used to it that I often find myself touching a laptop or desktop monitor now, and wondering why nothing is happening.
So why is touch such a big deal? Because more and more of the devices that we either use now or are going to use in the near future (think smartphones and tablets) rely on a touch interface, and that means that software applications, even expert software like GIS or 3D virtual landscapes, will need to be touch-compatible if they are going to make the transition to new hardware platforms. Even more importantly, software users who haven’t worked with touch are probably going to have to come to grips with this new interface style. Some people take to touch quite quickly and intuitively, while others are going to struggle a bit.
On the developer side, writing applications with touch capabilities presents challenges, such as accounting for the limited precision of finger movements and touch pressure, and creating meaningful gestures for complex commands. When you’re talking about a complex series of tasks like working with map layers in a GIS, for instance, it can get a bit tricky. Still, we’ve already got the example of ArcGIS for iOS, which is available on the small iPhone screen and the larger iPad and works quite well with a number of touch gestures. However, it’s quite a leap from streamlined, lightweight mobile apps to a full-fledged desktop GIS software package, which might require a real rethinking of how users interact with the various modules, viewers and tools to get a satisfying touch interface working.
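To make the gesture problem a little more concrete, here is a minimal sketch of the math behind one common mapping gesture: turning a two-finger pinch into a change in web-map zoom level. The function name and parameters are my own invention for illustration, not from any particular GIS SDK.

```python
import math

def pinch_zoom_level(zoom, start_touches, end_touches, min_zoom=0, max_zoom=19):
    """Map a two-finger pinch gesture to a new map zoom level.

    Each *touches* argument is a pair of (x, y) screen points.
    Spreading the fingers apart zooms in; pinching them together zooms out.
    """
    def distance(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)

    scale = distance(end_touches) / distance(start_touches)
    # Web-map zoom levels are powers of two, so the zoom delta is log2(scale).
    new_zoom = zoom + math.log2(scale)
    return max(min_zoom, min(max_zoom, new_zoom))

# Fingers spread to twice their starting separation: zoom in one level.
print(pinch_zoom_level(10, [(100, 100), (200, 100)], [(50, 100), (250, 100)]))
```

Even this toy version hints at the design questions a desktop GIS would face: what should the same gesture mean when the user is editing a layer rather than navigating, and how do you disambiguate it from a two-finger pan?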
So, even if you’re not a user of a touch device now, you may find that changing in the near future. Computing platforms and interfaces are changing whether everyone likes it or not and, while I don’t think keyboards and mice are going away anytime soon, in the world of technology they’ve been around for ages and may find themselves going the way of the punch card.
First, no geobloggers’ tables, so typing here is going to be a major, major pain. So please excuse the brevity, typos and other errors. Second, the wifi is as wonky as you’d expect when 15k geo nerds hit it at the same time. They’ve dimmed the lights and we’re off with the customary intro video. Jack takes the stage, giving us our welcome and telling us to say howdy-do to the neighbors. As normal, I’m typing this instead :). So a virtual howdy-do to those reading now.
I just jury-rigged a ‘desktop’ so I can type. It’s almost comical. All I can say is thank god Barb wears scarves.
Jack is back chatting. He’s talking a bit about the types of things we’re doing. It’s the normal laundry list of things we do – transportation, environmental conservation, energy exploration, etc. Fun fact: for the first time ever, the Michelin maps are done in ArcMap. The Mars lander is using ESRI products to help find its way around (“God I hope they got the projection right” – Jack). He’s talking about GIS infrastructure.
We’re seeing the 2012 SAG winners – congrats to them all. Big round of applause! A special award is going to the Trust for Public Land. They want to protect public lands so people have parks to enjoy. I’m assuming their GIS staff basically keeps track of that land. They’re shooting for everyone to be within a 10-minute walk of a park or other outdoor recreation space. I’m sure that was a bit of hyperbole, but good luck to them! Next, the US EPA is getting a special award for their work. The EPA employees are standing for applause. That was kinda cool of him to do. Jack picked the EPA because they’ve worked hard to integrate science into public policy using GIS as the platform.
A year after the introductory post to my “All tied up” series, I am actually releasing my next post. In the intervening year, terms have, if anything, become even more interwoven, with many of us often reaching for the now de facto ‘geospatial technologies’ to explain the wealth of technologies and data that we pull out of the toolbox and database for any given project. The term that has been most hidden by this (in my opinion, with no easy way to back it up) is Remote Sensing. By Remote Sensing I refer to what Lillesand and Kiefer define as “The science and art of obtaining information…acquired by a device that is not in contact with the object…”.
This is a very broad definition and it captures all of the ways in which remotely sensed information is captured, but here I will narrow it down to those raster-based data (and occasionally point cloud data) that are captured from a distance. We can easily include photogrammetry (planes, balloons, etc) and satellite remote sensing capturing everything from panchromatic to hyperspectral images.
While we are on ‘what it is’, I will include what may occasionally get lumped in. Remote sensing is not all sensor data from remote locations. While the term is not used incorrectly there, it is not always the same thing, since some of these sensors are in direct contact with what they are measuring (stream gauges, temperature sensors, etc). So in a Venn diagram there is a large overlap between sensors located remotely and remote sensing instruments, but they are not completely overlapping sets. Kind of an aside, but I wanted to make a Venn diagram.
Getting back to remote sensing, there are two ways to look at the term. One is that it isn’t so much tied up as largely absent in the industry today. In many areas, imagery has become the term of choice and, of course, the backdrop in our web maps, cartographic products, etc. In these projects and products we talk about imagery, but its source has become an almost unimportant aspect of some work. The other way we look at remote sensing is definitely one that is tied up in GIS. Many, many moons ago you had raster software and vector software, and much of that raster geospatial software was driven by remote sensing activities, but that has changed (I think we can agree, for the better) in the GIS space as vector and raster have come together. As discussions on the podcast with some of the leading remote sensing software vendors have shown, they are in line with, or even making their tools available directly within, GIS software packages. We have lost the divide between GIS and Remote Sensing that having to switch between applications gave us, and the separation continues to fade in the software arena.
What does stand out in terms of Remote Sensing, not tangled up if you will, is the hardware used to capture imagery (from satellite to helicopter to kite to drone…) and the data itself. This content continues to push our industry forward, as we can collect broad-swath information that pushes science forward (e.g. moderate resolution data) while we continue to create sensors with ever-finer resolution and higher accuracy and precision (e.g. lidar). These data, as mentioned, are seen now more than ever in web maps and virtual globes, but it is the analytical potential that they offer, whether for time series, human/environment processes, or finding archaeological sites, that is the strength of our investment in remote sensing platforms and data.
We will cut the strings there, as raster analysis is another set of terms that have been tied together as well. But keep in mind as you are doing your research or projects that your imagery is the result of over a century of research in capturing and manipulating images from a distance. While we have begun to take the technologies for granted in some cases, remote sensing remains an integral part of many areas of the industry.
Like all of us, I’m a creature of habit. I start my day off with the obligatory gallon and a half of coffee and my normal web rounds to see what’s new since I signed off the night before. One of my favorite places on the web to hit is Ikea Hackers. I love the idea that people look at these pre-built objects not as end items, but as things that can be manipulated, moved, altered, added to, and… well… ‘hacked’ into new versions. I love to study the hacks, see if I can emulate them, see if I can extend them. I even start to look at individual hacks and see if I can hack a couple of hacks together. It’s like a grown-up version of Lego. The pictures on the website are like the pictures on the boxes of Lego – a suggestion of where to move forward. It just thrills me to no end.
Ikea Hackers works because Ikea exists. I know that’s simplistic, but it has some serious implications. Someone has gone through the hassle and problems of making things that fit together in different ways. They figured out how those things can fit together. They made (technical terms alert!) the doohickeys that make the thingies fit into the what-da-ya-call-ems. Those things just work. An allen wrench, a screwdriver, and a few off-color words and you can have a bookcase or even a bed. We have this base of objects that are designed specifically to work together in very specific and defined ways. Hacking those things becomes so much easier because it’s left to the hacker to envision new ways in which these things, each designed to fit together on its own, can be fit together. The hacker is effectively designing new interfaces to things that already have some well-defined interfaces. On top of that, they throw in an aesthetic change that can ultimately change the whole product from top to bottom… transforming the ‘hack’ into a whole ‘nother critter.
So what does hacking Ikea furniture have to do with geography and geospatial technology? A lot, I think, specifically as it applies to newer forms of representation such as virtual reality, or serious games, or whatever term you like here*. We can think of the elements in Ikea as a raw product that can be adapted, combined, reconfigured, changed, or removed as necessary for a specific outcome. It isn’t left to hackers of Ikea furniture to create the raw products – Ikea has already done that for them. Nobody goes out to a sawmill, grabs some sawdust, glues it together under pressure, slaps some white scratch-resistant sheets over that new pressboard, then drills holes to hold these metal connectors they hand-forged with allen heads in them so the boards can fit together. Those already exist at Ikea, so why would you?
Unfortunately I think in the virtual universe, we’re still stuck at the raw materials stage instead of the raw products stage. We have to go out and make our virtual worlds from scratch – every line, every polygon, every bit of physics, nearly every bit of texture needs to be hand-created. That puts a LOT of constraint on the uptake of the virtual, I think. Some of us simply don’t have the artistic chops to put this stuff together, and even those who do often don’t have the programming chops to build the world once the models are made. Sure, we can collaborate to get the skills we’re missing, but that takes a shared space to interact and a shared objective. I can program and want to study World War I trenches. You can build models and graphics, but you’re interested in religion in early America. Let’s call the whole thing off.
Admittedly there has been some movement toward making the raw products. Google just sold their 3D modeling software to Trimble, and Adobe and Autodesk maintain applications, for instance. The problem with these products is they focus more upon the model and less upon the process. That’s great for artistically declined people like myself, but not so great for the programmatically challenged. The methods and the process are missing. Then again, even if the model exists, it might not be malleable, either because of ability, license, or source material. To turn back to my Ikea analogy, I can set a bookcase on top of a table, but that’s not the same thing as ‘hacking’ the two together, now is it? For the hacking culture to spark, grow, and expand, there needs to be something to ‘hack’, not this nebulous mass of stuff we have to work into something usable.
How do we get there? I have no idea. Does there need to be an accessible corporate vehicle that encourages this sort of hacking, i.e. a ‘VR Ikea’? Does it have to come organically from the community? Is it the intersection of the two? Where does the spark that kicks this off come from? The current attempts at answering these questions kinda feel like old carbureted cars that would get flooded when you tried to start them. We’re kinda flooded right now in the move from creating everything from scratch to ‘hacking’. I can kinda see bits and pieces of the path from flooded to fully running, and it excites me. I desperately want to go into a VR Ikea and grab this model and that model and this physics approach and hack something new and innovative and interesting. I can taste it. Then again, it could just be those Swedish meatballs I’m jonesing for… who knows?
*Jesse note: I will tackle these terms at the beginning of August