Space Between My Ears – The Geography of Cars

[Photo: Jaguar XK120, front view]

I love cars. I’m a proper gear head, or petrol head if you’re in the UK. Each and every year, I eagerly gobble up all the new car news from the various car shows around the world. Designers and engineers are always tweaking this and playing with that, trying to eke out more power, better fuel economy, and prettier designs to get the public to buy. And I can’t get enough of every little change, every little evolution, every revolution of car design and technology. It isn’t just the supercars that cost 80 bagillion dollars that I couldn’t hope to buy sans a really lucky lottery ticket. I dig the small cars designed for the intro market (current object of fascination – the Fiat 500, now available in the States). I dig the cheap rear wheel drive sports cars for people on a tight budget (I lust after the new BRZ/FR-S/GT86). Even the idea of a returned Ford Ranger pickup gets my heart racing. I just love all cars.

As much as I love all these new cars and their ever greater technology, my real car passion is in the classics. Every year for the last three years, you’ll see that my ESRI User Conference badge reads, “Ask me about classic cars”. I especially have a weakness for classic British cars, particularly the roadsters. I can talk for hours about the average MGB roadster (heaven help you if you get around ESRI’s Elvin Slavik and myself when the subject of MGs comes up). I’d almost give a toe for the chance to own an old Mini at a decent price. That doesn’t keep me from loving cars from other countries, though. The Camaro is clearly a thing of beauty. The People’s Car, despite its questionable heritage, is a marvel of engineering. Honestly, how can you not be impressed by a car whose engine you can remove in under two minutes with no power tools? For that matter, I’ll always lust after a VW GTI Mk1, the originator of the ‘hot hatchback’. If there’s someone who isn’t just blown away by the sheer beauty of the Ferrari California GT Spyder, I’m not sure I want to meet them. The ’49 Shoebox Ford is so much the definition of classic that it practically screams to be in a parade or at least on a long Sunday drive. Is there anything more American than the classic Ford F100 truck? Or anything more British than the Land Rover Series I, II, and III (except, of course, the sexiest car that ever existed)?


Space between my ears: You Have To Be Flexible

For the last few years, I’ve been doing all my interactive mapping development using ESRI’s Flex API. If you know me really well, you’d realize that’s a pretty big deal. No, wait, that’s the mother of all big deals. Now I should give a little background – no less than five years ago, I once declared that anyone who develops in Flash should be punched in the head, kicked down a hill, and be required to write Assembly for the rest of their professional lives using nothing more than a Sinclair ZX81 computer. You’d be hard pressed to find a bigger Flash hater than me. Yet a couple of years ago, I found myself turning to the thing I hated most to get my day to day job done. The transition wasn’t easy. I sneered and dragged my heels the whole time I was learning it. I said all sorts of bad things about Adobe, about Adobe’s developers, about ESRI’s choices in business partners, even about myself. I took long showers in the morning to attempt to keep the ‘Flash stink’ off of me. I even started avoiding Jesse in the halls because he would just sigh sadly and shake his head (Sue was ok to talk to because she was doing C#, so she knew all about programming in languages nobody likes or uses :). Honestly, it was a mess.

Here’s the thing that really took me by surprise – I found out I kinda like Flex. Strike that – I actually really like Flex. Once I got used to its particularities, the code was actually fairly elegant and simple. Skinning is really cool. I love the theoretical ability to separate form from function. I say ‘theoretical’ because in practice I’ve noticed people tend to just bundle the two together. I like the mix of XML based approaches and actual scripting. The modularization is rather nice. The ability to ‘draw’ my components on the screen is unlike anything I’ve ever used before in web development. I really like the fact there are multiple ways to do things. If I can’t get the skinning to work, I can turn to CSS to get the job done. Don’t get me wrong, there’s a bunch that annoys me about Flex. For instance, datagrids just suck. They’re ugly and annoying and I hate them. And whose bright idea was it to make it impossible to access a database directly from Flex? I gotta drop to another language to pull it off? Annoying. But that’s not really the point of this piece. The point is that I found out Flex was actually really powerful and allowed me to quickly create Rich Internet Applications with little to no cross browser testing.

Allow me to digress for a minute and underscore that last point – little to no cross browser testing. Anyone who has developed for the web will tell you the biggest pain in the rear is getting a site to work the same way in different browsers. For whatever reason, the browser people can’t come together and agree on one rendering engine. Really, that’s all we ask as developers – make the thing work the same in all browsers. Is that too much to ask? Every minute of time I spend trying to get things to work the same in four different browsers makes me want to send a bill for my time to Microsoft, Google, the Mozilla Foundation, and Apple. I’m doing their work and I’m not happy about it. But as I said, I digress.

Where am I going with all of this? The point I’m trying to make is that we developers have to remain flexible in our approaches. Right now, the whole area of Rich Internet Applications is in turmoil. Silverlight looks abandoned, Flex has been pretty much tossed aside by its creator, Adobe, and HTML 5 won’t even technically be approved as a standard until 2014. We are kind of caught jumping from one cliff to the other. A whole lot of people are talking about HTML 5 as the future, but we aren’t there right now. As of this writing, I have three browsers on my computer. When I run the HTML 5 test (http://html5test.com/), Chrome version 21 gets 437 points out of 500 for HTML 5 compatibility. Firefox 15.0 gets 346 points. Internet Explorer 9 gets a depressing 138 points. You can easily see how developers are going to have to fall back into cross browser testing hell fairly quickly. That saps development time and resources that are in scarce supply. My ultimate point is that HTML 5 still isn’t ready for prime time.
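To make that concrete, here’s a minimal feature-detection sketch (in TypeScript, and purely illustrative; none of these checks are tied to any particular project) of the kind of probing those uneven scores force on you: test for an HTML 5 capability before you lean on it, and branch when a browser comes up short.

```typescript
// Probe for a few HTML 5 capabilities before relying on them.
// Each check is a branch you have to write, test, and maintain
// for every browser you intend to support.

function hasGeolocation(): boolean {
  return "geolocation" in navigator;
}

function hasCanvas(): boolean {
  const canvas = document.createElement("canvas");
  return typeof canvas.getContext === "function" && canvas.getContext("2d") !== null;
}

function hasLocalStorage(): boolean {
  try {
    window.localStorage.setItem("__probe__", "1");
    window.localStorage.removeItem("__probe__");
    return true;
  } catch (e) {
    return false; // private browsing modes or older browsers can land here
  }
}

// Decide up front which code paths this particular browser gets.
const support = {
  geolocation: hasGeolocation(),
  canvas: hasCanvas(),
  storage: hasLocalStorage(),
};

console.log("HTML 5 support profile:", support);
```

The specific checks don’t matter much; the point is that every capability the browsers disagree on becomes another conditional path in your application, and that’s exactly where the development time goes.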

Muddying the waters even further is the whole issue of mobile. I’m not sure I’ve been in a development meeting anytime in the last year where mobile wasn’t brought up at least once. It’s out there and it’s something with which we have to contend, like it or not. Do you go HTML 5 on a mobile browser so it works everywhere (in theory)? Do you go native? Do you sink the resources to do Android AND iOS, never mind leaving people like Sue running Windows Phone out in the cold?

The truth is there is no silver bullet for today’s Rich Internet Applications, mobile or otherwise. Everyone is rightfully scared their platform of choice, be it Flex or Silverlight or any other, will go the way of ColdFusion as a ‘used to be useful’. Recent trends in web development have underscored Responsive Web Design, which aims to adapt your application to whatever form factor the user is using, be it a small handheld, a tablet, a desktop, or even a large CAVE environment. It’s a noble goal, and I’m sure lots of people doing Responsive designs will tell you the day to day constraints of budgets and timelines have a major impact on how ‘responsive’ the final project is capable of becoming. I believe we need to adopt a Responsive Mapping Design. Certainly there are steps in this direction – the expansion of web standards to include rather advanced features like location, multimedia, and storage functions; the expansion of frameworks like Dojo or jQuery to make legacy technologies like JavaScript more responsive; and the use of more capable and ever evolving APIs, such as ESRI’s web APIs, Google’s, or social media stores like Twitter, Flickr and the like. That all helps. But again, none of them is a silver bullet.
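To give a rough flavor of what I mean by a Responsive Mapping Design, here’s a bare-bones sketch using only standard browser capabilities (matchMedia for form factor, the geolocation API for location). The configureMap function is hypothetical glue of my own invention, standing in for whichever mapping API you actually build on, ESRI’s, Google’s, or otherwise.

```typescript
// Sketch of a map setup that adapts to the device at runtime:
// pick a layout from the form factor, and center on the user
// if the browser can tell us where they are.

interface MapConfig {
  layout: "phone" | "tablet" | "desktop";
  center?: { lat: number; lon: number };
}

function chooseLayout(): MapConfig["layout"] {
  if (window.matchMedia("(max-width: 600px)").matches) return "phone";
  if (window.matchMedia("(max-width: 1024px)").matches) return "tablet";
  return "desktop";
}

// Hypothetical: hand the finished config to whatever map component you use.
function configureMap(config: MapConfig): void {
  console.log("Configuring map:", config);
}

const config: MapConfig = { layout: chooseLayout() };

if ("geolocation" in navigator) {
  navigator.geolocation.getCurrentPosition(
    (pos) => {
      config.center = { lat: pos.coords.latitude, lon: pos.coords.longitude };
      configureMap(config);
    },
    () => configureMap(config) // no location available? ship the layout anyway
  );
} else {
  configureMap(config);
}
```

The code itself is trivial; the point is that the application makes layout and content decisions at runtime based on what the device and the user can tell it, rather than assuming one form factor up front.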

What they do is underscore the importance of responsiveness in current and future Internet Mapping projects, and by ‘responsive’, I’m referring to developers and developing environments. We have to remain ever responsive because the world in which we work is ever changing, perhaps more so than ever before. The skills we learn today may serve us little beyond the abstract tomorrow. And that’s pretty darn scary. But it’s also pretty exciting. It’s a lot of work keeping up, remaining flexible, and responding to changes. It’s also kinda rewarding.

Space between my ears – Sometimes We Need A Grade School Refresher

Occasionally there is a national news item that bubbles up to take headlines and starts a dialogue about a formerly fringe-ish topic. This week, there were two. In order of occurrence, the first is that an Italian judge has found six scientists criminally negligent for failing to predict an earthquake and has sentenced them to six years in jail for manslaughter. The second is that a US Presidential candidate thinks Iran is connected to Syria, and that this connection is what gives Iran a link to water (and thus shipping). Those two might not look connected, but they are. Let me take them in reverse order.

Let’s not get all political here about the relative merits of one party over another in US politics. We at VerySpatial don’t profess to know the intricacies of political policy, economics, foreign policy, and all the other issues surrounding US Presidential politics. But we do profess to know at least a thing or two about Geography, which apparently one candidate forgot. Mitt Romney said, “Syria is Iran’s only ally in the Arab world. It’s their route to the sea.” This is false: Iran has miles and miles of coastline, not to mention it isn’t even connected to Syria. You have to go through Iraq to get there, and it’s not like Iran and Iraq have a history of being buddy-buddy. Obviously this is a major gaffe from a Presidential candidate, but the real issue here is that a surprisingly large number of journalists and members of the general public do not even recognize it as such. An error, sure, but do they think it is a big error? Not nearly as much. I think this speaks to the lack of geographic literacy in the US as much as anything. More deeply, I think it speaks to the apathy toward facts prevalent in US public discourse today. We need to remember that facts are the bedrock on which decisions are made. If we can’t get our facts accurate, how can we expect to make decent decisions or analyses?

This leads to the second news item, which is that Italy has convicted six scientists of manslaughter for failing to predict an earthquake. The punch line to the story is that, in the eyes of the court, these six scientists failed to adequately predict the degree of danger and are therefore legally culpable. The obvious problem here is that earthquake prediction is a tricky endeavor at best. There are so many variables to contend with in earthquake prediction – time, location, magnitude – and each of those has so many sub-variables that true earthquake prediction is basically a bit dodgy (editor – as the Italians would say, impossibile). Further complicating this process is the fact that it is almost as bad to falsely predict that an earthquake is going to happen as it is to fail to predict one that does. Call it the ‘Boy Who Cried Wolf’ effect, if you will. The reality of earthquake prediction today is that we are simply ill equipped to predict the future; we can only measure that which has already happened. Facts are important in this case, but we also have to know the limitations of facts. We have to have a good idea of what a fact is capable of telling us and what it isn’t. In my opinion, the Italian judge in charge of this case has made a grievous error in assuming facts that simply aren’t there, or at least aren’t predictable from known facts. How can we make decent decisions or analyses if we can’t understand the limits of what we know?

Space between my ears: There’s A Quality About This Stuff…

I love to cook. My wife says I’m pretty good at it, although that could just be an attempt to not have to cook herself. We don’t have cable so I don’t get to watch many cooking shows. However, occasionally when we travel, I get the Food Network… then I become a couch potato. I’m hooked on watching people compete by preparing different dishes in different ways. I especially love the ‘random box of stuff’ sort of competition where the chef needs to work out something delicious from a collection of ingredients. That’s some real creativity there.

What I don’t love about these shows is the judging part. This collection of experts sits down and looks at the food, smells the food, and tastes the food before rendering a ‘this is better than that’ decision. Now I’m no expert (I’m hardly a decent cook, much less a chef or a food expert), but it all seems so arbitrary to me. However, I’ve noticed a bit of a linguistic turn coming into the descriptions from the judges. There are what I’d dub ‘axis’ words, such as ‘acidic’, ‘sweet’, ‘sharp’, and ‘bright’. Then there are more descriptive words that seem to modify those a bit, such as ‘tangy’, ‘flavorful’, ‘steep’, ‘heavy’, ‘light’, etc. Finally, there are some comparison words, most commonly ‘balance’ and ‘counter’. So you might hear a description such as, “I like that you balanced the heavy sweetness with a flavorful acidic quality.” To my quantitative ear, it almost sounds like these professional chefs and judges have a sort of equation in their heads that moves along the axes of acidic, bright, and sweet. It is as if there is an attempt at an extremely loose quantification of what is basically a qualitative thing. How do you impose a sort of equation on what’s basically opinion? I don’t like raw tomatoes, so anything with raw tomatoes is going to trend ‘negative’ for me personally, despite their utility in providing an acidic quality to the food equation. The point being, any one person’s mileage may vary here.

Now let’s jump to geography and GIS. Lately I’ve been straddling the divide between qualitative and quantitative data. We like quantitative data because you can measure it, you can compare it, you can mathematically transform it, if it relates to space you can map it, you can color it…. you can do all sorts of things to it. Not only that, we can represent it in known ways with agreed upon conventions. Things like gradients or relative shape sizes have been reasonably well worked out. We tend to know what to expect, and in fact we recoil when they’re not what we expect. Quantitative data doesn’t present us with many challenges. Qualitative data is a whole ‘nother critter. We don’t always know what to do with qualitative data. We can’t even agree whether it has much utility or not (editor’s note: WHAT? Blasphemy!). We certainly don’t know how to store it in transformable forms, or even how we’d like to transform the information. We don’t know how to compare it, or even if it is comparable. And the issue of representation? That’s so far out there and varied it’s barely on the radar.

You can certainly see why judges seem to be attempting to ‘quantify’ these qualitative measures. With numbers you can work out a ‘winner’ – feelings, emotions, and tastes are a LOT harder because of their subjectivity. However, I think we lose so much when we attempt to boil down these complex flavors and aromas into a comparable framework. How can we capture that information and still retain the ability to compare, contrast, and represent in agreed upon ways? I’m going to do the thing I hate the most and cop out, because I don’t have an answer. To be honest, I don’t think there is ‘an’ answer but a series of answers we have to work out.

More things, less stuff

Like all of us, I’m a creature of habit. I start my day off with the obligatory gallon and a half of coffee and my normal web rounds to see what’s new since I signed off the night before. One of my favorite places on the web to hit is Ikea Hackers. I love the idea that people look at these pre-built objects not as end items, but as things that can be manipulated, moved, altered, added to, and… well… ‘hacked’ into new versions. I love to study the hacks, see if I can emulate them, see if I can extend them. I even start to look at individual hacks and see if I can hack a couple of hacks together. It’s like a grown-up version of Lego. The pictures on the website are like the pictures on the boxes of Lego – a suggestion of where to move forward. It just thrills me to no end.

Ikea Hackers works because Ikea exists. I know that’s simplistic, but it has some serious implications. Someone has gone through the hassle and problems of making things that fit together in different ways. They figured out how those things can fit together. They made (technical terms alert!) the doohickeys that make the thingies fit into the what-da-ya-call-ems. Those things just work. An Allen wrench, a screwdriver, and a few off color words and you can have a bookcase or even a bed. We have this base of objects that are designed specifically to work together in very specific and defined ways. Hacking those things becomes so much easier because it’s left to the hacker to envision new ways in which these things, each designed to fit together in its own particular way, can be fit together. The hacker is effectively designing new interfaces to things that already have some well defined interfaces. On top of that, they throw in an aesthetic change that can ultimately change the whole product from top to bottom… transforming the ‘hack’ into a whole ‘nother critter.

So what does hacking Ikea furniture have to do with geography and geospatial technology? A lot, I think, specifically as it applies to newer forms of representation such as virtual reality, or serious games, or whatever term you like here*. We can think of the elements in Ikea as raw products that can be adapted, combined, reconfigured, changed, or removed as necessary for a specific outcome. It isn’t left to hackers of Ikea furniture to create the raw products – Ikea has already done that for them. Nobody goes out to a sawmill, grabs some sawdust, glues it together under pressure, slaps some white scratch-resistant sheets over that new pressboard, then drills holes to hold metal connectors they hand-forged with Allen heads in them so the boards can fit together. Those already exist at Ikea, so why would you?

Unfortunately I think in the virtual universe, we’re still stuck at the raw materials stage instead of the raw products stage. We have to go out and make our virtual worlds from scratch – every line, every polygon, every bit of physics, nearly every bit of texture needs to be hand created. That puts a LOT of constraint on the uptick in the virtual, I think. Some of us simply don’t have the artistic chops to put this stuff together, and even those who do often don’t have the programming chops to build the world once the models are made. Sure, we can collaborate to get the skills we’re missing, but that takes a shared space to interact and a shared objective. I can program and want to study World War I trenches. You can build models and graphics, but you’re interested in religion in early America. Let’s call the whole thing off.

Admittedly there has been some movement toward making the raw products. Google just sold their 3D modeling software to Trimble, and Adobe and Autodesk maintain applications, for instance. The problem with these products is they focus more upon the model and less upon the process. That’s great for artistically declined people like myself, but not so great for the programmatically challenged. The methods and the process are missing. Then again, even if the model exists, it might not be malleable, either because of ability, license, or source material. To turn back to my Ikea analogy, I can set a bookcase on top of a table, but that’s not the same thing as ‘hacking’ the two together, now is it? For the hacking culture to spark, grow, and expand, there needs to be something to ‘hack’, not this nebulous mass of stuff we have to work into something usable.

How do we get there? I have no idea. Does there need to be an accessible corporate vehicle that encourages this sort of hacking, i.e. a ‘VR Ikea’? Does it have to come organically from the community? Is it the intersection of the two? Where does the spark that kicks this off come from? The current attempts at answering these questions kinda feel like old carbureted cars that flood when you try to start them. We’re kinda flooded right now in the move from creating everything from scratch to ‘hacking’. I can kinda see bits and pieces of the path from flooded to fully running and it excites me. I desperately want to go into a VR Ikea and grab this model and that model and this physics approach and hack something new and innovative and interesting. I can taste it. Then again, it could just be those Swedish meatballs I’m jonesing for… who knows?

*Jesse note: I will tackle these terms at the beginning of August