Pins on the Map: George Washington Slept Here

As you shiver in the cold today, during what The Weather Channel is predicting could be North America's coldest winter in decades, reflect on the 1780 snowstorm that hit George Washington's army at Jockey Hollow in Morristown, NJ, now a National Park that commemorates the Continental Army's winter encampment (December 1779 – June 1780). Here the soldiers survived the tail end of what historians and paleoclimatologists dub "the Little Ice Age". Continue reading “Pins on the Map: George Washington Slept Here”

Space Between My Ears – The Geography of Cars

[Photo: Jaguar XK120, front view]

I love cars. I'm a proper gear head, or petrol head if you're in the UK. Each and every year, I eagerly gobble up all the new car news from the various car shows around the world. Designers and engineers are always tweaking this and playing with that, trying to eke out more power, better fuel economy, and prettier designs to get the public to buy. And I can't get enough of every little change, every little evolution, every revolution of car design and technology. It isn't just the supercars that cost 80 bajillion dollars and that I couldn't hope to buy without a really lucky lottery ticket. I dig the small cars designed for the intro market (current object of fascination – the Fiat 500, now available in the States). I dig the cheap rear wheel drive sports cars for people on a tight budget (I lust after the new BRZ/FR-S/GT86). Even the idea of a returning Ford Ranger pickup gets my heart racing. I just love all cars.

As much as I love all these new cars and their ever greater technology, my real car passion is the classics. Every year for the last three years, my ESRI User Conference badge has read, "Ask me about classic cars". I especially have a weakness for classic British cars, particularly the roadsters. I can talk for hours about the average MGB roadster (heaven help you if you get around ESRI's Elvin Slavik and myself when the subject of MGs comes up). I'd almost give a toe for the chance to own an old Mini at a decent price. That doesn't keep me from loving cars from other countries. The Camaro is clearly a thing of beauty. The People's Car, despite its questionable heritage, is a marvel of engineering. Honestly, how can you not be impressed by a car whose engine you can remove in under two minutes with no power tools? For that matter, I'll always lust after a VW GTI Mk1, the originator of the 'hot hatchback'. If there's someone who isn't just blown away by the sheer beauty of the Ferrari California GT Spyder, I'm not sure I want to meet them. The '49 Shoebox Ford is so much the definition of classic that it practically screams to be in a parade, or at least on a long Sunday drive. Is there anything more American than the classic Ford F100 truck? Or anything more British than the Land Rover Series I, II, and III (except, of course, the sexiest car that ever existed)?

Continue reading “Space Between My Ears – The Geography of Cars”

Space between my ears: You Have To Be Flexible

For the last few years, I've been doing all my interactive mapping development using ESRI's Flex API. If you know me really well, you'd realize that's a pretty big deal. No, wait, that's the mother of all big deals. Now I should give a little background – no less than five years ago, I declared that anyone who develops in Flash should be punched in the head, kicked down a hill, and required to write Assembly for the rest of their professional lives using nothing more than a Sinclair ZX81. You'd be hard pressed to find a bigger Flash hater than me. Yet a couple of years ago, I found myself turning to the thing I hated most to get my day-to-day job done. The transition wasn't easy. I sneered and dragged my heels the whole time I was learning it. I said all sorts of bad things about Adobe, about Adobe's developers, about ESRI's choices in business partners, even about myself. I took long showers in the morning to try to keep the 'Flash stink' off of me. I even started avoiding Jesse in the halls because he would just sigh sadly and shake his head (Sue was ok to talk to because she was doing C#, so she knew all about programming in languages nobody likes or uses :). Honestly, it was a mess.

Here's the thing that really took me by surprise – I found out I kinda like Flex. Strike that – I actually really like Flex. Once I got used to its particularities, the code was actually fairly elegant and simple. Skinning is really cool. I love the theoretical ability to separate form from function. I say 'theoretical' because in practice I've noticed people tend to just bundle the two together. I like the mix of XML-based approaches and actual scripting. The modularization is rather nice. The ability to 'draw' my components on the screen is unlike anything I've ever used before in web development. I really like the fact that there are multiple ways to do things. If I can't get the skinning to work, I can turn to CSS to get the job done. Don't get me wrong, there's a bunch that annoys me about Flex. For instance, datagrids just suck. They're ugly and annoying and I hate them. And whose bright idea was it to make it impossible to access a database directly from Flex? I gotta drop to another language to pull it off? Annoying. But that's not really the point of this piece. The point is that I found out Flex was actually really powerful and allowed me to quickly create Rich Internet Applications with little to no cross-browser testing.

Allow me to digress for a minute and underscore that last point – little to no cross-browser testing. Anyone who has developed for the web will tell you the biggest pain in the rear is getting things to work the same way in different browsers. For whatever reason, the browser makers can't come together and agree on one rendering engine. Really, that's all we ask as developers – make the thing work the same in all browsers. Is that too much to ask? Every minute I spend trying to get things to work the same in four different browsers makes me want to send a bill for my time to Microsoft, Google, the Mozilla Foundation, and Apple. I'm doing their work and I'm not happy about it. But as I said, I digress.

Where am I going with all of this? The point I'm trying to make is that we developers have to remain flexible in our approaches. Right now, the whole area of Rich Internet Applications is in turmoil. Silverlight looks abandoned, Flex has been pretty much tossed aside by its creator, Adobe, and HTML 5 isn't even technically approved as a standard until 2014. We are kind of caught jumping from one cliff to the other. A whole lot of people are talking about HTML 5 as the future, but we aren't there right now. As of this writing, I have three browsers on my computer. When I run the HTML 5 test (http://html5test.com/), Chrome version 21 gets 437 points out of 500 for HTML 5 compatibility. Firefox 15.0 gets 346 points. Internet Explorer 9 gets a depressing 138 points. You can easily see how developers are going to fall back into cross-browser testing hell fairly quickly. That saps development time and resources that are in scarce supply. My ultimate point is that HTML 5 still isn't ready for prime time.
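In practice, those uneven scores mean you end up probing for each capability before you rely on it, rather than trusting a browser's overall number. Here is a minimal feature-detection sketch in TypeScript – just an illustration, not tied to any particular framework or to the html5test scoring – showing the kind of per-feature checks and fallbacks that cross-browser work pushes you into:

```typescript
// Minimal browser feature-detection sketch (TypeScript).
// Each HTML 5 capability is probed before use instead of assumed from a version number.

function hasGeolocation(): boolean {
  return "geolocation" in navigator;
}

function hasLocalStorage(): boolean {
  try {
    window.localStorage.setItem("__probe__", "1");
    window.localStorage.removeItem("__probe__");
    return true;
  } catch (e) {
    return false; // unsupported, or blocked by privacy settings
  }
}

function hasCanvas(): boolean {
  return !!document.createElement("canvas").getContext("2d");
}

export function reportCapabilities(): void {
  console.log("geolocation:", hasGeolocation());
  console.log("localStorage:", hasLocalStorage());
  console.log("canvas 2d:", hasCanvas());
}
```

Every one of those checks is a little piece of the cross-browser tax: when a probe fails, you either write a fallback or drop the feature.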

Muddying the waters even further is the whole issue of mobile. I'm not sure I've been in a development meeting in the last year where mobile wasn't brought up at least once. It's out there and it's something we have to contend with, like it or not. Do you go HTML 5 in a mobile browser so it works everywhere (in theory)? Do you go native? Do you sink the resources into doing Android AND iOS, never mind leaving people like Sue running Windows Phone out in the cold?

The truth is there is no silver bullet for today's Rich Internet Applications, mobile or otherwise. Everyone is rightfully scared that their platform of choice, be it Flex or Silverlight or any other, will go the way of ColdFusion into the 'used to be useful' bin. Recent trends in web development have underscored Responsive Web Design, which aims to adapt your application to whatever form factor the user has, be it a small handheld, a tablet, a desktop, or even a large CAVE environment. It's a noble goal, and I'm sure plenty of people doing Responsive designs will tell you the day-to-day constraints of budgets and timelines have a major impact on how 'responsive' the final project can become. I believe we need to adopt a Responsive Mapping Design. Certainly there are steps in this direction – the expansion of web standards to include rather advanced features like location, multimedia, and storage; the growth of frameworks like Dojo and jQuery that make legacy technologies like JavaScript more responsive; and the use of more capable, ever-evolving APIs, such as ESRI's web APIs, Google's, or social media stores like Twitter, Flickr, and the like. That all helps. But again, none of them is a silver bullet.
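To make the idea of a Responsive Mapping Design a little more concrete, here is a hedged TypeScript sketch of the decision layer I have in mind. The form-factor checks use the standard window.matchMedia API; applyMapConfig() is a hypothetical hook into whichever mapping API you happen to be using, so it is left as a comment:

```typescript
// Sketch of a "responsive mapping" decision layer: pick map behavior by form factor.

interface MapConfig {
  showLayerList: boolean;   // full layer control only on larger screens
  basemapLabels: boolean;   // drop label layers on tiny screens to save bandwidth
  maxVisibleLayers: number; // cap complexity on small devices
}

function configForViewport(): MapConfig {
  if (window.matchMedia("(max-width: 600px)").matches) {
    return { showLayerList: false, basemapLabels: false, maxVisibleLayers: 3 };
  }
  if (window.matchMedia("(max-width: 1024px)").matches) {
    return { showLayerList: true, basemapLabels: true, maxVisibleLayers: 8 };
  }
  return { showLayerList: true, basemapLabels: true, maxVisibleLayers: 20 };
}

// Re-evaluate whenever the window changes shape (rotation, resize, docking).
window.addEventListener("resize", () => {
  const config = configForViewport();
  // applyMapConfig(config); // hypothetical hook into the mapping API of choice
  console.log("viewport config:", config);
});
```

The breakpoints and layer caps here are invented for illustration; the point is simply that the map itself, not just the page layout, responds to the device.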

What they do is underscore the importance of responsiveness in current and future Internet Mapping projects, and by ‘responsive’, I’m referring to developers and developing environments. We have to remain ever responsive because the world in which we work is ever changing, perhaps more so than ever before. The skills we learn today may serve us little beyond the abstract tomorrow. And that’s pretty darn scary. But it’s also pretty exciting. It’s a lot of work keeping up, remaining flexible, and responding to changes. It’s also kinda rewarding.

Lost in the Virtual Fog – …And a Lean to the Right

In the first half of my column a while back on the changing ways in which we interact with our computers, I focused on touch and its increasing adoption as an interface of choice, driven by the rapidly growing use of smartphones and other mobile devices. But as computers and digital information weave themselves deeper into the fabric of our daily lives, there is growing interest in new ways to interact with them, and an equally growing number of research projects, prototypes and even consumer products that are focused on making these interfaces a reality.

Microsoft's Kinect is a great example of a new user interface device that is picking up traction, first in the gaming console space, but increasingly getting attention in the broader computing world. The Kinect interface is based on tracking the user's body and movements, which Microsoft refers to as a Natural User Interface (NUI) or Human Interface. So far, a number of commercial and indie video games have utilized the Kinect, with varying degrees of success. The real excitement, I think, is that Microsoft is not only allowing but encouraging the community of users and developers to come up with new ways of using the Kinect to interact with the computer, by offering an API that lets anyone develop an application that uses the Kinect sensor. I've begun working with the Kinect API myself, and can already envision lots of ways the interface could be used for virtual world navigation and interaction.

At this year's E3 Conference, Microsoft gave more detail on their vision for how our human-computer interaction mechanisms will evolve with their presentation on Project SmartGlass, which includes Kinect navigation for Xbox Live entertainment in addition to the use of the Kinect as a game controller. You no longer need to hold an interface device, like a controller or mouse, in your hand; your body itself becomes the mechanism for interacting with your computer. You can use hand motions, head movements, and yes, even a lean to the right or left, to execute commands on your computer.

In addition, some might not know that the Kinect offers audio input capabilities as well, meaning that you can create custom voice commands for interacting with your computer, your game console, or really any other computing device. This can be a powerful interface combo: you can use voice commands to navigate menus, for example, and combine that with gestures or tracked hand movements to create a complex navigation scheme for a wide range of software applications.
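As a rough illustration of that combo, here is a hypothetical TypeScript sketch that maps a body 'lean' reading and a spoken phrase onto navigation commands. None of these types come from the Kinect SDK; they are stand-ins for whatever skeleton and speech events a sensor API actually delivers:

```typescript
// Hypothetical gesture + voice fusion: posture drives panning, speech drives zooming.

type Command = "pan-left" | "pan-right" | "zoom-in" | "zoom-out" | "none";

interface LeanSample {
  x: number; // signed lean amount, -1 (full left) to +1 (full right)
}

const LEAN_THRESHOLD = 0.3; // ignore small postural wobble (and, ideally, sneezes)

function commandFromLean(sample: LeanSample): Command {
  if (sample.x > LEAN_THRESHOLD) return "pan-right";
  if (sample.x < -LEAN_THRESHOLD) return "pan-left";
  return "none";
}

function commandFromSpeech(phrase: string): Command {
  switch (phrase.toLowerCase().trim()) {
    case "zoom in": return "zoom-in";
    case "zoom out": return "zoom-out";
    default: return "none";
  }
}

// A recognized voice command, when present, takes priority over posture.
function resolveCommand(lean: LeanSample, phrase: string | null): Command {
  const spoken = phrase ? commandFromSpeech(phrase) : "none";
  return spoken !== "none" ? spoken : commandFromLean(lean);
}

console.log(resolveCommand({ x: 0.5 }, null));      // "pan-right"
console.log(resolveCommand({ x: 0.5 }, "zoom in")); // "zoom-in"
```

That lean threshold is exactly the kind of ergonomic tuning knob I come back to below: set it too low and a sneeze becomes a pan command.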

There are limitations, of course, including the sensitivity of interface devices like the Kinect. How does it know whether I want to lean forward to move my avatar or whether I just sneezed and fell forward? There is still a lot of work to be done in the areas of ergonomics, gesture libraries, and other technical issues that can negatively affect a user’s experience with natural interfaces and lead to their rejection as a preferred alternative to the comforting familiarity of the keyboard and mouse. And audio input can suffer from similar issues of precision, as a multitude of users have experienced with that iPhone-encased voice vixen Siri.

And then there's the issue of how to change people's perceptions and habits, especially in the working world. Kinect for Windows, a version of the Kinect device calibrated so the NUI works for desktop computing, is designed to encourage developers and users to think about incorporating gesture-based interfaces and audio into everyday software applications, including productivity software like word processing programs and data and spreadsheet tools. Complex software interfaces full of menus, buttons, and text will be a tough nut to crack for broad adoption. But think of all the advantages if it can be cracked. With all of those studies suggesting that spending our days sitting in front of a computer is shortening our lifespans, think about how a natural user interface could get us up on our feet and make collaborating around a computer, or working together with our mobile devices projecting onto all kinds of surfaces, a far more active experience. Minority Report, Total Recall, you name it, we're seeing the tech in prototypes from projects all over the world.

Why should we in the geography and geospatial community be interested in what’s going on with these new technologies in human-computer interaction? On a general level, there is an inexorable movement toward the demise of the mouse and keyboard as the monopolistic gateways to our relationship with our computer. As a familiar analogue to the typewriter, the keyboard has served us well as we have grown up and into our professional lives in front of a computer. But its analog physicality is now limiting in a world where there’s a race to build and distribute faster, lighter, thinner, smaller, more powerful computing devices that can travel with us anywhere. It’s also limiting when new interfaces like an NUI mean that we could gather around our monitor or projection screen with our GIS open and have discussions, with multiple users moving the map view around, adding and removing layers, performing analyses, etc., all while comfortably interacting and not tied to a physical input device. With that visual of the geospatial collaboration lab I now want to build for my teaching and research, I’ll sign off for now and wander off into the virtual fog until next time.

All Tied Up: The 40 Year Old Geographer

OK, this month I have reached the second (or fourth, depending on your reckoning) milestone in terms of birthdays. The first, of course, is 21, when in the US you are legal to do pretty much anything except rent a car. The other two, if you are using the looser reckoning, are 16 (OK to drive on your own) and 18 (you are an "adult"). My preferred second milestone doesn't really come with anything added to your capabilities, and I am pretty sure that for me it will be just another day (I will be sure to tweet if something spectacular occurs). This month, I turn 40.

Normally I wouldn't talk about my age – I don't even have my birthday listed on Facebook – but this year has me thinking. I am younger than Esri and Intergraph by a few years, older than Imagine and ENVI by the same number of years, and I am only a few months behind the launch of the Landsat program. And this is really where my forty years comes in. As I look at the Landsat program I can see the life moments that it has gone through: the starts and stops, the reimaginings, the times when perseverance was all that kept it going, and the times when everything came together.

Unlike most of us when we reach 40, however, the Landsat program can look back at its life so far and see that it has changed the way we look at the world. And with continued support beyond next month’s launch of the LDCM, we can hope that the program is nowhere near middle-aged (fingers crossed for no looming midlife crisis…for Landsat or me).

Looking back at the Landsat program, it began as most of us do: a newborn whose path no one could predict. By the time it was three, it gained a new awareness of the world (Landsat 2 – 1975), but continued to grow. Just after it turned five, Landsat showed its precociousness (Landsat 3 – 1978) with the addition (though short-lived) of a thermal band. By the time the program turned ten, Landsat was ready to move to middle school (Landsat 4 – 1982). It kept many of its friends (MSS) from its previous school, but made new friends (TM) as well and was getting better at seeing what was around it (30m max resolution). As junior high came around (yes, we used to have those, in North Carolina at least), the program hit its stride. It wasn't necessarily the most popular kid in school, but all of the geeks thought it was great (Landsat 5 – 1984), and it was willing to work hard.

In the college years, Landsat had some setbacks: it tried to reach orbit, but it just didn't happen (Landsat 6 – 1993). In fact, it was kind of shaken by this and didn't try again for a few years (Landsat 7 – 1999). It made it this time, complete with new glasses (ETM+). It found its stride; it wasn't just the geekiest of folks looking at it anymore, and it began to make new friends and was always willing to share. The Landsat program had its problems with technology and financing, like anyone in the 2000s, but it stayed its course and persevered with its two trusty workhorses, Landsat 5 and Landsat 7. Right now, it is overcoming some issues, but looking at the potential for a great promotion this year.

In this description, I can see any number of people I grew up with. The only difference is that most of my friends haven't switched whisk-brooms for push-brooms, or would even want to deal with brooms at all, I would guess.

On a side note, I can imagine how excited and nervous the Landsat team is right now, with less than a month to go before the long-awaited LDCM launch and not much more before the highly anticipated first test images are released. I hope they remember that they have the support of a community that continues to grow and relies on their great efforts and achievements.

Pins on a map: Networks and Nostalgia

I was exploring the 2013 International Consumer Electronics Association conference website and didn't get further into it than the 2013 Best of Innovations Awards when nostalgia hit. The first category, Computer Hardware & Components, lists Moneual's Touch Table PC as a tool for restaurant patrons to order, entertain themselves, and pay their bill. As someone who grew up in an area surrounded by diners and hamburger joints (NJ is the diner capital of the world), I fondly remember Mr. Bee's hamburgers with fake bee hives under glass tables and Ms. Pac-Man table top games. Attendees at the annual ESRI International Conference in San Diego will probably relate it to the fun themed table tops at The Cheese Shop Deli near the conference center. The concept of a touch table top for a diner is both innovative and comfortable at the same time, because it builds on the diner tradition of providing entertainment, and often advertising space, on table tops.

I started to reflect on how many of the innovations the CES highlights this year, such as touch tables, tablets, and 3D visualization, feel more comfortable in their environments than the technology that led up to them did. Is one of the signs that a technology has matured that it blends in with its surroundings and daily life? According to the Wall Street Journal, restaurant chains are testing small, interactive computer screens at the table. They have found that diners tip more when given electronic choices. Some restaurants are even making a profit on table top game apps. The arguments that the apps take away from the dining experience also feel nostalgic; they are the same arguments that were used against table top arcade games in pizzerias.

3D visualization was another CES feature technology that had a nostalgic feel, because of arguments being made outside CES cautioning against its detrimental impacts, such as the software creating external corporate control over cities, film making, or other entities that use the technology. While the technology is new, many of the critiques of it, even when valid, feel like retreads. The same issues were raised with the use of innovations like office software tools and Technicolor movies. Other new technologies that had a nostalgic feel were touch tablets that can be extended to work with one or more monitors to create a better workflow. I often use my iPad as an electronic document holder and find that many of the principles for using it are a carryover from word processor and typewriter ergonomics. Of course, the most technologically nostalgic device is one like the USB typewriter, which modernizes old technology. Even the argument against "retro technologies" that try to fit new technologies into old boxes as nostalgic wallowing applies to today's Generation Flux as much as yesterday's Lost Generation.

A recent article in The Economist, "Has the ideas machine broken down? The idea that innovation and new technology have stopped driving growth is getting increasing attention. But it is not well founded," delves into this feeling of technology nostalgia. It discusses the fact that this overall feeling of stagnation and merely incremental innovation has been around for decades. According to the article, this is partly a factor of perception, starting point, and growth rate. In today's society, our technological imaginations are bigger than our reality. This might explain why criticisms of new innovations feel so similar to arguments about older technologies. A User Interface Engineering article, "Designing What's Never Been Done Before", sums it up by saying that we are often designing new solutions for old or existing problems. An Entrepreneur article, "The New Trends and Technologies Driving Design", written almost a year ago in February 2012, states that incrementalism and nostalgia are a manufactured part of the design process, or a built-in constraint. We have to face facts: in the life cycle of the grand technological growth curve, there isn't much difference between a manual and an electronic document holder.

I haven't decided if this nostalgia reflects more on my age or the age we live in, but I've decided to take the approach of Terry Pratchett: "It's still magic even if we know how it's done", even if it feels like we've been here before. I look forward to the new age of space exploration, which Wired magazine's "Almost Being There: Why the Future of Space Exploration Is Not What You Think" is trying to tell me won't live up to our imaginations because technology has transformed space exploration beyond our ken. We sociologically and psychologically feel uncomfortable with modern space travel because it exceeds the mental images we have built as a society. Maybe it is good to have innovations that challenge us intellectually and technologically as a society. I don't know if Marvin Minsky was partially joking when he chided NASA for being old fashioned and creating 10-year-olds jaded about space technology, but I like to think so, because I can't believe the magic has gone out of technology and innovation yet. Maybe the nostalgia and maturity I am referring to is actually ennui and jadedness that will be overcome when the next big leap happens to wow us in our lifetimes. I hope it happens soon.

 

Space between my ears – Sometimes We Need A Grade School Refresher

Occasionally there is a national news item that bubbles up to take headlines and starts a dialogue about a formerly fringe-ish topic. This week, there were two. In order of occurrence, the first is that an Italian judge has found six scientists criminally negligent for failing to adequately predict an earthquake and has sentenced them to six years in jail for manslaughter. The second is that a US Presidential candidate thinks Iran is connected to Syria and that this connection is what gives Iran a route to the sea (and thus shipping). Those two might not look connected, but they are. Let me take them in reverse order.

Let's not get all political here about the relative merits of one party over another in US politics. We at VerySpatial don't profess to know the intricacies of political policy, economics, foreign policy, and all the other issues surrounding US Presidential politics. But we do profess to know at least a thing or two about Geography, which apparently one candidate forgot. Mitt Romney said, "Syria is Iran's only ally in the Arab world. It's their route to the sea." This is false, as Iran has miles and miles of coastline, not to mention that it isn't even connected to Syria. You have to go through Iraq to get there, and it's not like Iran and Iraq have a history of being buddy-buddy. Obviously this is a major gaffe from a Presidential candidate, but the real issue here is that a surprisingly large number of journalists and members of the general public don't even recognize it as a gaffe. An error, sure, but do they think it is a big error? Not nearly as much. I think this speaks to the lack of geographic literacy in the US as much as anything. More deeply, I think it speaks to the apathy toward facts prevalent in US public discourse today. We need to remember that facts are the bedrock on which decisions are made. If we can't get our facts straight, how can we expect to make decent decisions or analyses?

This leads to the second news item, which is that Italy has convicted six scientists of manslaughter for failing to predict an earthquake. The punch line to the story is that these six scientists were unable to predict an earthquake and, in the eyes of the court, they failed to adequately predict the degree of danger and are therefore legally culpable. The obvious problem here is that earthquake prediction is a tricky endeavor at best. There are so many variables to contend with in earthquake prediction – time, location, magnitude – and each of those has so many sub-variables that true earthquake prediction is a bit dodgy (editor – as the Italians would say, impossibile). Further complicating this process is the fact that it is almost as bad to falsely predict an earthquake as it is to fail to predict one. Call it the 'Boy Who Cried Wolf' effect, if you will. The reality of earthquake prediction today is that we are simply ill equipped to predict the future; we can only measure what has already happened. Facts are important in this case, but we also have to know the limitations of facts. We have to have a good idea of what a fact is capable of telling us and what it isn't. In my opinion, the Italian judge in charge of this case has made a grievous error in assuming facts that simply aren't there, or at least aren't predictable from known facts. How can we make decent decisions or analyses if we can't understand the limits of what we know?

Pins on a Map: Geospatial Here, There, and Everywhere

When we were brainstorming what my column title and topical area should be, everyone knew it had to be like my posts – seemingly unrelated but always connecting back to the geospatial. The titles we tried out were All Over the Map (funny because it was so accurate), Pens on a Map (which was a great visual), and Pins on a Map. I chose Pins on a Map because I felt I would be pinning down the geospatial in everyday life around the world – "pin pointing" the geospatial in people's lives and professions, if I can use another pun.

But my pin pointing doesn't stop at VerySpatial. I am lucky to have jobs where I work on interdisciplinary projects and meet a cross section of people from different countries, professions, and ages. Throughout my day, when someone says something like "I only have 5 pages to get my point across for this grant and I need to fit in my ROI and demographics", I point out that what they are asking for is a great visual analysis, or a map. I then put them in touch with the appropriate GIS team and encourage them to get GIS training. If I am working with capstone students who are trying to boost their resumes, I make sure to mention the university's Esri site license that gives them free access to Esri online training courses. Later that day, when I am talking to someone about city planning, I will talk about participatory GIS and community projects.

I learned about geospatial concepts because, even though I wasn't a geographer or working in a typical geospatial field, someone took the time to explain them to me and to let me know how they impacted my everyday life. This made me realize that although I might not have always known the correct terminology, there was a spatial perspective to my work and interests. Like many of the discussions Sue, Jesse, and Frank have on VerySpatial, I didn't realize I was a geographer at heart until someone pointed out to me what geography meant in the real world. We live in a geospatial world, and many people don't realize it. I think one of the best ways to address this is to point out the geospatial when we see it and to let people know, "Hey! You might not realize it, but you are using geospatial concepts and geospatial technologies."

This is a big selling point, because I have found that a lot of people using these concepts in their day-to-day jobs are in fields like education, business, and the service industries, and they don't think of themselves as being in a field that uses technology or science. For example, SEO content writers often play an integral role in location-based services and don't even realize it. The benefits of knowing about geospatial concepts and GIS go beyond the impact of the analysis or project. I know several people who said that just knowing GIS was out there and how people used it helped them in a job interview. Each time I hear that, I mentally put another pin in the map I keep in my mind.

I think that all geospatial professionals are putting pins on a map as they go through their day and that these pins all connect up to create general geospatial awareness. The key to raising awareness is consistency and coverage. You don't have to use a PowerPoint or give a big presentation; sometimes it's the little reinforcing comments that pin something down the most.

Thanks, Sue, Jesse, and Frank, for placing my pin on the map when you first pointed out, "Hey, you know that thing you are trying to do? It's a spatial concept." Throughout my columns I will explore geospatial concepts and technologies in different forms.

Space between my ears: There’s A Quality About This Stuff…

I love to cook. My wife says I’m pretty good at it, although that could just be an attempt to not have to cook herself. We don’t have cable so I don’t get to watch many cooking shows. However, occasionally when we travel, I get the Food Network… then I become a couch potato. I’m hooked on watching people compete by preparing different dishes in different ways. I especially love the ‘random box of stuff’ sort of competition where the chef needs to work out something delicious from a collection of ingredients. That’s some real creativity there.

What I don't love about these shows is the judging part. This collection of experts sits down and looks at their food, smells their food, and tastes their food, rendering a 'this is better than that' decision. Now I'm no expert (I'm hardly a decent cook, much less a chef or a food expert), but it all seems so arbitrary to me. However, I've noticed a bit of a linguistic turn in the descriptions from the judges. There are what I'd dub 'axis' words, such as 'acidic', 'sweet', 'sharp', and 'bright'. Then there are more descriptive words that seem to modify those a bit, such as 'tangy', 'flavorful', 'steep', 'heavy', 'light', etc. Finally, there are some comparison words, most commonly 'balance' and 'counter'. So you might hear a description such as, "I like that you balanced the heavy sweetness with a flavorful acidic quality." To my quantitative ear, it almost sounds like these professional chefs and judges have a sort of equation in their heads that moves along the axes of acidic, bright, and sweet. It is as if there is an attempt at an extremely loose quantification of what is basically a qualitative thing. How do you impose a sort of equation on what's basically opinion? I don't like raw tomatoes, so anything with raw tomatoes is going to trend 'negative' for me personally, despite their utility in providing an acidic quality to the food equation. The point being, any one person's mileage may vary here.
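Just to show how loose that mental equation really is, here is a toy TypeScript sketch of it: a handful of descriptor words mapped onto a few invented numeric axes, plus a crude 'balance' score. The numbers are made up purely for illustration – they come from no real sensory scale, which is rather the point:

```typescript
// Toy "loose quantification" of qualitative taste descriptors.

const DESCRIPTOR_SCORES: Record<string, { acidity: number; sweetness: number; weight: number }> = {
  tangy:  { acidity: 0.8, sweetness: 0.1, weight: 0.3 },
  bright: { acidity: 0.6, sweetness: 0.2, weight: 0.2 },
  heavy:  { acidity: 0.1, sweetness: 0.4, weight: 0.9 },
  sweet:  { acidity: 0.1, sweetness: 0.9, weight: 0.5 },
};

// "Balance" as a crude score: how evenly the dish spreads across the axes.
function balanceScore(descriptors: string[]): number {
  const totals = { acidity: 0, sweetness: 0, weight: 0 };
  for (const d of descriptors) {
    const s = DESCRIPTOR_SCORES[d];
    if (!s) continue; // an unknown word simply can't be placed by this model
    totals.acidity += s.acidity;
    totals.sweetness += s.sweetness;
    totals.weight += s.weight;
  }
  const axes = [totals.acidity, totals.sweetness, totals.weight];
  const spread = Math.max(...axes) - Math.min(...axes);
  return 1 / (1 + spread); // higher = more "balanced", lower = lopsided
}

console.log(balanceScore(["heavy", "sweet", "tangy"])); // fairly balanced
console.log(balanceScore(["sweet", "sweet"]));          // lopsided toward sweetness
```

Notice everything the sketch throws away: my dislike of raw tomatoes, the judge's mood, the context of the dish. That is exactly the information this kind of quantification loses.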

Now let's jump to geography and GIS. Lately I've been straddling the divide between qualitative and quantitative data. We like quantitative data because you can measure it, you can compare it, you can mathematically transform it, if it relates to space you can map it, you can color it… you can do all sorts of things to it. Not only that, we can represent it in known ways with agreed-upon conventions. Things like gradients or relative shape sizes have been reasonably well worked out. We tend to know what to expect, and in fact we recoil when they're not what we expect. Quantitative data doesn't present us with many challenges. Qualitative data is a whole 'nother critter. We don't always know what to do with qualitative data. We can't even agree whether it has much utility or not (editor's note: WHAT? Blasphemy!). We certainly don't know how to store it in transformable forms, or even how we'd like to transform the information. We don't know how to compare it, or even if it is comparable. And the issue of representation? That's so far out there and varied it's barely on the radar.

You can certainly see why judges seem to be attempting to 'quantify' these qualitative measures. With numbers you can work out a 'winner' – feelings, emotions, and tastes are a LOT harder because of their subjectivity. However, I think we lose so much when we attempt to boil down these complex flavors and aromas into a comparable framework. How can we capture that information and still retain the ability to compare, contrast, and represent it in agreed-upon ways? I'm going to do the thing I hate the most and cop out, because I don't have an answer. To be honest, I don't think there is 'an' answer, but a series of answers we have to work out.
Continue reading “Space between my ears: There’s A Quality About This Stuff…”

Lost in the Virtual Fog – Just a Swipe to the Left

It’s been quite a while since my last column in this series, and a lot has happened in the geospatial world and the world of computing in general. I hope to give my thoughts on some of these trends over the next few months as I catch up to the world around me after finally finishing my PhD. One of the trends that I have been following with a lot of interest is definitely the move toward new ways of interacting with our computers.

To kick things off, I wanted to talk a little bit about what’s been going on with the growing presence of touch interfaces. While the keyboard and mouse still reign supreme in desktop computing, the success of the iPad and other tablets, as well as smartphones, has definitely broadened the reach of touch as a user interface. And that is filtering its way back into the desktop computing space, with the rise in popularity of all-in-one computers with touch-enabled monitors. In fact, I am writing this post on one of those new touch all-in-ones, the Lenovo A720.

I am finding my own computing behavior changing as well. For my day-to-day work and web surfing, I rely on my Windows slate tablet (ASUS Eee Slate EP121), which is also touch-enabled. Most of the devices I interact with on a daily basis have touch interfaces, and I've become so used to them that I often find myself touching a laptop or desktop monitor and wondering why nothing is happening.

So why is touch such a big deal? Because more and more of the devices that we either use now or are going to use in the near future (think smartphones and tablets) rely on a touch interface, and that means that software applications, even expert software like GIS or 3D virtual landscapes, will need to be touch-compatible if they are going to make the transition to new hardware platforms. Even more importantly, software users who haven’t worked with touch are probably going to have to come to grips with this new interface style. Some people take to touch quite quickly and intuitively, while others are going to struggle a bit.

On the developer side, writing applications with touch capabilities presents challenges, such as handling the precision of finger movements and touch pressure, and creating meaningful gestures for complex commands. When you're talking about a complex series of tasks, like working with map layers in a GIS, it can get a bit tricky. Still, we've already got the example of ArcGIS for iOS, which is available on the small iPhone screen and the larger iPad and works quite well with a number of touch gestures. However, it's quite a leap from streamlined, lightweight mobile apps to a full-fledged desktop GIS software package, which might require a real rethinking of how users interact with the various modules, viewers, and tools to get a satisfying touch interface working.
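As a small illustration of the developer-side plumbing, here is a minimal TypeScript sketch that turns raw browser touch events into map commands: one finger pans, two fingers pinch to zoom. The touchstart/touchmove events are standard DOM APIs; the MapLike object with pan() and zoom() is a hypothetical stand-in for whatever mapping API you are actually targeting, and a real implementation would need far more care with thresholds, momentum, and gesture conflicts:

```typescript
// Minimal touch-to-map-command sketch: one-finger drag pans, two-finger pinch zooms.

interface MapLike {
  pan(dx: number, dy: number): void;
  zoom(factor: number): void;
}

function distance(a: Touch, b: Touch): number {
  return Math.hypot(a.clientX - b.clientX, a.clientY - b.clientY);
}

export function attachTouchNavigation(el: HTMLElement, map: MapLike): void {
  let lastX = 0, lastY = 0, lastDist = 0;

  el.addEventListener("touchstart", (e: TouchEvent) => {
    if (e.touches.length === 1) {
      lastX = e.touches[0].clientX;
      lastY = e.touches[0].clientY;
    } else if (e.touches.length === 2) {
      lastDist = distance(e.touches[0], e.touches[1]);
    }
  });

  el.addEventListener("touchmove", (e: TouchEvent) => {
    e.preventDefault(); // keep the browser from scrolling the page instead
    if (e.touches.length === 1) {
      map.pan(e.touches[0].clientX - lastX, e.touches[0].clientY - lastY);
      lastX = e.touches[0].clientX;
      lastY = e.touches[0].clientY;
    } else if (e.touches.length === 2) {
      const dist = distance(e.touches[0], e.touches[1]);
      if (lastDist > 0) map.zoom(dist / lastDist);
      lastDist = dist;
    }
  }, { passive: false }); // passive: false so preventDefault() is honored
}
```

Even this stripped-down version hints at the problem: every additional GIS operation needs its own unambiguous gesture, and the vocabulary of fingers on glass runs out quickly.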

So, even if you’re not a user of a touch device now, you may find that changing in the near future. Computing platforms and interfaces are changing whether everyone likes it or not and, while I don’t think keyboards and mice are going away anytime soon, in the world of technology they’ve been around for ages and may find themselves going the way of the punch card.