A VerySpatial Podcast
Shownotes – Episode 396
February 17, 2013
Main Topic: Ryan Draughn of NCLGISA
For the last few years, I’ve been doing all my interactive mapping development using ESRI’s Flex API. If you know me really well, you’d realize that’s a pretty big deal. No, wait, that’s the mother of all big deals. Now I should give a little background – no less than five years ago, I declared that anyone who develops in Flash should be punched in the head, kicked down a hill, and be required to write Assembly for the rest of their professional lives using nothing more than a Sinclair ZX81 computer. You’d be hard pressed to find a bigger Flash hater than me. Yet a couple of years ago, I found myself turning to the thing I hated most to get my day-to-day job done. The transition wasn’t easy. I sneered and dragged my heels the whole time I was learning it. I said all sorts of bad things about Adobe, about Adobe’s developers, about ESRI’s choices in business partners, even about myself. I took long showers in the morning to attempt to keep the ‘Flash stink’ off of me. I even started avoiding Jesse in the halls because he would just sigh sadly and shake his head (Sue was ok to talk to because she was doing C#, so she knew all about programming in languages nobody likes or uses :). Honestly, it was a mess.
Here’s the thing that really took me by surprise – I found out I kinda like Flex. Strike that – I actually really like Flex. Once I got used to its particularities, the code was actually fairly elegant and simple. Skinning is really cool. I love the theoretical ability to separate form from function. I say ‘theoretical’ because in practice I’ve noticed people tend to just bundle the two together. I like the mix of XML-based approaches and actual scripting. The modularization is rather nice. The ability to ‘draw’ my components on the screen is unlike anything I’ve ever used before in web development. I really like the fact there are multiple ways to do things. If I can’t get the skinning to work, I can turn to CSS to get the job done. Don’t get me wrong, there’s a bunch that annoys me about Flex. For instance, datagrids just suck. They’re ugly and annoying and I hate them. And whose bright idea was it to make it impossible to access a database directly from Flex? I gotta drop to another language to pull it off? Annoying. But that’s not really the point of this piece. The point is that I found out Flex was actually really powerful and allowed me to quickly create Rich Internet Applications with little to no cross-browser testing.
Allow me to digress for a minute and underscore that last point – little to no cross browser testing. Anyone who has developed for the web will tell you the biggest pain in the rear is getting it to work the same way on different browsers. For whatever reasons, the browser people can’t come together and agree on one rendering engine. Really, that’s all we ask as developers – make the thing work the same in all browsers. Is that too much to ask? Every minute of time I spend trying to get things to work the same in four different browsers makes me want to send a bill for my time to Microsoft, Google, Mozilla Foundation, and Apple. I’m doing their work and I’m not happy about it. But as I said, I digress.
Where am I going with all of this? The point I’m trying to make is that we developers have to remain flexible in our approaches. Right now, the whole area of Rich Internet Applications is in turmoil. Silverlight looks abandoned, Flex has been pretty much tossed aside by its creator, Adobe, and HTML 5 isn’t even technically approved as a standard until 2014. We are kind of caught jumping from one cliff to the other. A whole lot of people are talking about HTML 5 as the future, but we aren’t there right now. As of this writing, I have three browsers on my computer. When I run the HTML 5 test (http://html5test.com/), Chrome version 21 gets 437 points out of 500 for HTML 5 compatibility. Firefox 15.0 gets 346 points. Internet Explorer 9 gets a depressing 138 points. You can easily see how developers are going to have to fall back into cross-browser testing hell fairly quickly. That saps development time and resources that are in scarce supply. My ultimate point is that HTML 5 still isn’t ready for prime time.
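The kind of capability gap html5test measures can be probed directly from your own code with plain feature detection – checking for an API’s existence instead of sniffing the browser name. Here’s a minimal sketch; the function name and the handful of features checked are my own illustrative picks, not html5test’s actual methodology:

```javascript
// Minimal HTML5 feature detection, in the spirit of html5test.com.
// Each check probes for the API itself rather than the browser name,
// so the same code works (or degrades gracefully) everywhere.
function detectHtml5Features(win) {
  var doc = win.document;
  return {
    canvas: !!(doc.createElement('canvas').getContext),
    geolocation: 'geolocation' in win.navigator,
    localStorage: 'localStorage' in win,
    webWorkers: typeof win.Worker !== 'undefined',
    video: !!doc.createElement('video').canPlayType
  };
}

// Usage in a mapping app: fall back instead of assuming support.
// var features = detectHtml5Features(window);
// if (!features.canvas) { /* render tiles with plain <img> instead */ }
```

The point of the pattern is that the fallback path is yours to write once, rather than something you rediscover per browser in cross-browser testing hell.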
Muddying the waters even further is the whole issue of mobile. I’m not sure I’ve been in a development meeting in the last year where mobile wasn’t brought up at least once. It’s out there and it’s something with which we have to contend, like it or not. Do you go HTML 5 on a mobile browser so it works everywhere (in theory)? Do you go native? Do you sink the resources into doing Android AND iOS, never mind leaving people like Sue running Windows Phone out in the cold?
What these questions do is underscore the importance of responsiveness in current and future Internet Mapping projects – and by ‘responsive’, I’m referring to developers and development environments. We have to remain ever responsive because the world in which we work is ever changing, perhaps more so than ever before. The skills we learn today may serve us little beyond the abstract tomorrow. And that’s pretty darn scary. But it’s also pretty exciting. It’s a lot of work keeping up, remaining flexible, and responding to changes. It’s also kinda rewarding.
In the first half of my column a while back on the changing ways in which we interact with our computers, I focused on touch and its increasing adoption as an interface of choice, driven by the rapidly growing use of smartphones and other mobile devices. But as computers and digital information weave themselves deeper into the fabric of our daily lives, there is growing interest in new ways to interact with them, and an equally growing number of research projects, prototypes and even consumer products that are focused on making these interfaces a reality.
Microsoft’s Kinect is a great example of a new user interface device that is gaining traction, first in the gaming console space, but increasingly getting attention in the broader computing world. The Kinect interface is based on tracking the user’s body and movements, which Microsoft refers to as a Natural User Interface (NUI) or Human Interface. So far, a number of commercial and indie video games have utilized the Kinect, with varying degrees of success. But the real excitement with the Kinect, I think, is that Microsoft is not only allowing but encouraging the community of users and developers to come up with new ways of using the Kinect to interact with the computer, by offering an API that allows anyone to develop an application that uses the Kinect sensor. I’ve begun working with the Kinect API myself, and can already envision lots of ways that the interface could be used for virtual world navigation and interaction.
At this year’s E3 Conference, Microsoft gave more detail on their vision for how our human-computer interaction mechanisms will evolve with their presentation on Project SmartGlass, which includes Kinect navigation for Xbox Live entertainment in addition to the use of the Kinect as a game controller. You no longer need to hold an interface device, like a controller or mouse, in your hand; your body itself becomes the mechanism for interacting with your computer. Now you can use hand motions, head movements, and yes, even a lean to the right or left, to execute commands on your computer.
In addition, some might not know that the Kinect offers audio input capabilities as well, meaning that you can create custom voice commands for interacting with your computer, your game console, or really any other computing device. This can be a powerful interface combo: you can use voice commands to navigate menus, for example, and combine that with gestures or tracked hand movements to create a complex navigation scheme for a wide range of software applications.
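As a rough illustration of that combo, here is a hypothetical sketch of routing voice and gesture events through one shared command table, so a spoken “zoom in” and a push-forward gesture trigger the same action. The `createCommandRouter` function and the event names are invented for the example – this is not part of any Kinect API:

```javascript
// Hypothetical sketch: one command table serving both voice and
// gesture input. Each recognized input event is bound to a named
// action, so multiple modalities can drive the same behavior.
function createCommandRouter(actions) {
  var bindings = {}; // maps an input event name to an action name
  return {
    bind: function (eventName, actionName) {
      bindings[eventName] = actionName;
    },
    dispatch: function (eventName) {
      var actionName = bindings[eventName];
      if (actionName && actions[actionName]) {
        actions[actionName]();   // run the bound action
        return actionName;
      }
      return null;               // unrecognized input is ignored
    }
  };
}

// Usage: two very different inputs, one action.
// router.bind('voice:zoom in', 'zoomIn');
// router.bind('gesture:push', 'zoomIn');
```

The design choice here is that the recognizers (speech, skeleton tracking) stay dumb – they just emit named events – and all the “what does this mean” logic lives in one table you can inspect and reconfigure.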
There are limitations, of course, including the sensitivity of interface devices like the Kinect. How does it know whether I want to lean forward to move my avatar or whether I just sneezed and fell forward? There is still a lot of work to be done in the areas of ergonomics, gesture libraries, and other technical issues that can negatively affect a user’s experience with natural interfaces and lead to their rejection as a preferred alternative to the comforting familiarity of the keyboard and mouse. And audio input can suffer from similar issues of precision, as a multitude of users have experienced with that iPhone-encased voice vixen Siri.
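The sneeze problem is, at least in part, a signal-processing problem: a deliberate lean is sustained, a sneeze is a spike. Here is a hypothetical sketch of one common approach – smooth the raw signal and require it to stay past a threshold for a minimum dwell time before firing. The numbers and the `createLeanDetector` function are invented for illustration, not anything from the Kinect SDK:

```javascript
// Hypothetical sketch: telling a deliberate lean from a sneeze.
// The raw lean angle (degrees) is smoothed with an exponential
// moving average, and a "lean" only fires after the smoothed value
// stays past the threshold for several consecutive samples.
function createLeanDetector(threshold, dwellSamples, alpha) {
  var smoothed = 0; // exponentially weighted average of recent angles
  var count = 0;    // consecutive samples past the threshold
  return function update(rawAngle) {
    smoothed = alpha * rawAngle + (1 - alpha) * smoothed;
    if (Math.abs(smoothed) >= threshold) {
      count += 1;
    } else {
      count = 0;    // any dip below threshold resets the dwell timer
    }
    return count >= dwellSamples; // true only for a sustained lean
  };
}
```

A single violent jolt pushes the smoothed value up briefly but decays before the dwell count is met, while a held lean accumulates samples and triggers – which is exactly the ergonomic tuning work that still needs doing for these interfaces.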
And then there’s the issue of how to change people’s perceptions and habits, especially in the working world. Kinect for Windows, a Kinect device calibrated to allow the NUI to work for desktop computing applications, is designed to encourage developers and users to think about incorporating gesture-based interfaces and audio into everyday software applications, including productivity software like word processing programs and data and spreadsheet tools. Complex software interfaces full of menus, buttons, and text will be a tough nut to crack for broad adoption. But think of all the advantages if it can be cracked. With all of those studies suggesting that spending our days sitting in front of a computer is shortening our lifespans, think about how a natural user interface could get us up on our feet and make collaborating around a computer, or working together with our mobile devices projecting onto all kinds of surfaces, part of the everyday routine. Minority Report, Total Recall, you name it – we’re seeing the tech in prototypes from projects all over the world.
Why should we in the geography and geospatial community be interested in what’s going on with these new technologies in human-computer interaction? On a general level, there is an inexorable movement toward the demise of the mouse and keyboard as the monopolistic gateways to our relationship with our computer. As a familiar analogue to the typewriter, the keyboard has served us well as we have grown up and into our professional lives in front of a computer. But its analog physicality is now limiting in a world where there’s a race to build and distribute faster, lighter, thinner, smaller, more powerful computing devices that can travel with us anywhere. It’s also limiting when new interfaces like an NUI mean that we could gather around our monitor or projection screen with our GIS open and have discussions, with multiple users moving the map view around, adding and removing layers, performing analyses, etc., all while comfortably interacting and not tied to a physical input device. With that visual of the geospatial collaboration lab I now want to build for my teaching and research, I’ll sign off for now and wander off into the virtual fog until next time.
OK, this month I have reached the second (or fourth, depending on your reckoning) milestone in terms of birthdays. The first, of course, is 21, where in the US you are legal to do pretty much anything except rent a car. The other two, if you are using the looser reckoning, are 16 (OK to drive on your own) and 18 (you are an “adult”). My preferred second milestone doesn’t really come with anything added to your capabilities, and I am pretty sure that for me it will be just any other day (I will be sure to tweet if something spectacular occurs). This month, I turn 40.
Normally, I wouldn’t talk about my age, I don’t even have my birthday listed on Facebook, but this year has me thinking. I am younger than Esri and Intergraph by a few years, older than Imagine and ENVI by the same number of years, and I am only a few months behind the launch of the Landsat program. And this is really where my forty years comes in. As I look at the Landsat program I can see the life moments that it has gone through: the starts and stops, the reimaginings, the times when perseverance was all that kept it going, and the times that everything came together.
Unlike most of us when we reach 40, however, the Landsat program can look back at its life so far and see that it has changed the way we look at the world. And with continued support beyond next month’s launch of the LDCM, we can hope that the program is nowhere near middle-aged (fingers crossed for no looming midlife crisis…for Landsat or me).
Looking back at the Landsat program, it began as most do: a newborn no one was quite sure where it would go. By the time it was three, it gained a new awareness of the world (Landsat 2 – 1975), but continued to grow. Just after it turned five, Landsat showed its precociousness (Landsat 3 – 1978) with the addition (though short-lived) of a thermal band. By the time the program turned ten, Landsat was ready to move to middle school (Landsat 4 – 1982). It kept many of its friends (MSS) from its previous school, but made new friends (TM) as well and was getting better at seeing what was around (30m max resolution). As junior high came around (yes, we used to have those in North Carolina at least) the program hit its stride. It wasn’t necessarily the most popular kid in school, but all of the geeks thought it was great (Landsat 5 – 1984) and it was willing to work hard.
In the college years, Landsat had some setbacks: it tried to reach orbit, but it just didn’t happen (Landsat 6 – 1993). In fact, it was kind of shaken by this and didn’t try again for a few years (Landsat 7 – 1999). It made it this time, complete with new glasses (ETM+). It found its stride; it wasn’t just the geekiest of folks looking at it anymore, and it began to make new friends and was always willing to share. The Landsat program had some problems with technology and financing like anyone in the 2000s, but the program stayed its course and persevered, with its two trusty workhorses, Landsat 5 and Landsat 7. Right now, it is overcoming some issues, but looking at the potential for a great promotion this year.
In this description, I can see any number of people that I grew up with. The only difference is that most of my friends haven’t switched whisk-brooms for push-brooms – or even wanted to deal with brooms at all, I would guess.
On a side note, I can imagine how excited and nervous the Landsat team is right now, with less than a month to go before the long-awaited LDCM launch and not much more before the highly anticipated first test images are released. I hope they remember that they have the support of a community that continues to grow and relies on their great efforts and achievements.
I was exploring the 2013 International CES (Consumer Electronics Show) website and didn’t get further into it than the 2013 Best of Innovations Awards when nostalgia hit. The first category, Computer Hardware & Components, lists Moneual’s Touch Table PC as a tool for restaurant patrons to order, entertain themselves, and pay their bill. As someone who grew up in an area surrounded by diners and hamburger joints (NJ is the diner capital of the world), I fondly remember Mr. Bee’s hamburgers with fake bee hives under glass tables and Ms. PacMan table top games. Attendees at the annual ESRI International Conference in San Diego will probably relate it to the fun themed table tops at The Cheese Shop Deli near the conference center. The concept of a touch table top for a diner is both innovative and comfortable at the same time, because it builds on the diner tradition of providing entertainment, and often advertising space, on table tops.
I started to reflect on how many of the innovations that CES highlights this year, such as touch tables, tablets, and 3D visualization, feel more comfortable in their environments than the technology that led up to them did. Is one of the signs that a technology has matured that it blends in with its surroundings and daily life? According to the Wall Street Journal, restaurant chains are testing out small, interactive computer screens at the table. They have found that diners tip more when given electronic choices. Some restaurants are even making a profit on table top game apps. The arguments that the apps take away from the dining experience also feel nostalgic; they are the same arguments that were used against table top arcade games in pizzerias.
3D visualization was another CES feature technology that had a nostalgic feel, because of arguments being made outside CES cautioning against its detrimental impacts, such as the fear that the software hands external corporate control over cities, filmmaking, or other fields that use the technology. While the technology is new, many of the critiques of it, even when valid, feel retread. The same issues were raised with the use of innovations like office software tools and Technicolor movies. Other new technologies that had a nostalgic feel were touch tablets that can be extended to work with one or more monitors to create a better workflow. I often use my iPad as an electronic document holder and find that many of the principles for using it are a carryover from word processor and typewriter ergonomics. Of course, the most technologically nostalgic device is one like the USB typewriter that modernizes old technology. Even the argument against “retro technologies” – that trying to fit new technologies into old boxes is nostalgic wallowing – applies to today’s Generation Flux and yesterday’s Lost Generation.
A recent article in The Economist delves into this feeling of technology nostalgia: “Has the ideas machine broken down?: The idea that innovation and new technology have stopped driving growth is getting increasing attention. But it is not well founded.” It discusses the fact that this overall feeling of stagnancy and incremental rates of innovation has been going on for decades. According to the article, this is partly a factor of perception, starting point, and growth rate. In today’s society, our technological imaginations are bigger than our own reality. This might explain why criticisms of new innovations feel so similar to arguments about older technologies. A User Interface Engineering article, “Designing What’s Never Been Done Before”, sums it up by saying that we are often designing new solutions for old or existing problems. An Entrepreneur article, “The New Trends and Technologies Driving Design”, written almost a year ago in February 2012, states that incrementalism and nostalgia are a manufactured part of the design process, or a built-in constraint. We have to face facts: in the life cycle of the grand technological growth curve there isn’t much difference between a manual and an electronic document holder.
I haven’t decided if this nostalgia reflects more on my age or the age we live in, but I’ve decided to take the approach of Terry Pratchett – “It’s still magic even if we know how it’s done” – even if it feels like we’ve been here before. I look forward to the new age of space exploration, which Wired magazine’s “Almost Being There: Why the Future of Space Exploration Is Not What You Think” is trying to tell me won’t live up to our imaginations because technology has transformed space exploration beyond our ken. We sociologically and psychologically feel uncomfortable with modern space travel because it exceeds the mental images we have built as a society. Maybe it is good to have innovations that challenge us intellectually and technologically as a society. I don’t know if Marvin Minsky was only partially joking when he chided NASA for being old-fashioned and creating 10-year-olds jaded about space technology, but I like to think so, because I can’t believe the magic has gone out of technology and innovation yet. Maybe the nostalgia and maturity I am referring to is actually ennui and jadedness that will be overcome when the next big leap happens to wow us in our lifetimes. I hope it happens soon.