Lost in the Virtual Fog – A Question of Scale

I have been remiss in not doing any diary entries for a while, but I have been feverishly working to get my demo XNA application ready for the ESRI UC presentation. Finally, today I think I got the last bit of functionality on my list working, so I am pretty excited and crossing my fingers that everything will run right at the conference. Of course, you can never predict a live demo, so tonight I am recording a few videos of Virtual Morgantown in action, using a cool little software tool called GameCam.

Last month, we were happy to be able to go to Pittsburgh to cover the Game Education Summit, held at Carnegie Mellon's Entertainment Technology Center. We got some nice interviews, including the conversation with ETC Pittsburgh Director Drew Davidson, which we featured on Episode 206 of the podcast. While I'm going to have another entry soon specifically about my thoughts on the Game Education Summit, since I've gotten back I've had to burn the midnight oil to get Virtual Morgantown looking and running the way I want it for this stage in the project. As I've been sitting here opening each model in SketchUp, cleaning up what I can, and exporting the models to the XNA application as .X files (many, many thanks to Zbyl for the .X Exporter plugin!), I am continually reminded of the challenges of working at this scale after coming from a GIS background.
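For anyone curious what the XNA end of that export pipeline looks like, a minimal sketch of loading and drawing one of the models might look like the following. This is the standard XNA content-pipeline pattern, not the author's actual code; the asset name "Models/courthouse" is made up, and it assumes the exported file has been added to the XNA Content project so the pipeline can compile it.

```csharp
// Inside a Game subclass. "Models/courthouse" is a hypothetical asset name.
Model buildingModel;

protected override void LoadContent()
{
    // The content pipeline compiles the exported model file into an .xnb asset.
    buildingModel = Content.Load<Model>("Models/courthouse");
}

// Draw the model at a given world position using its built-in effects.
void DrawBuilding(Matrix world, Matrix view, Matrix projection)
{
    foreach (ModelMesh mesh in buildingModel.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.World = world;
            effect.View = view;
            effect.Projection = projection;
        }
        mesh.Draw();
    }
}
```

The `world` matrix is where the per-feature attention comes in: each structure gets its own translation, rotation, and scale so it sits at the right real-world location and height.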

As you can zoom in and essentially immerse yourself at a nearly 1:1 scale in the virtual world, issues that never would have mattered suddenly become vital. Even the virtual globes like Google Earth, World Wind, and ArcGIS Explorer aren't really meant to be used at that scale, as their background imagery and 3D models look best from a viewpoint well above ground level. So, when you get down to the ground and are actually representing real features, you have to give each one at least some individual attention. It's often the opposite of the way most of us are trained: rather than looking for commonalities and creating data layers that characterize those similarities, you have to bring out the aspects that make a particular feature unique.

It's a strange notion for a lot of geographers and GIS people, I think, to shift their perspective: instead of starting with a zoomed-out view of the world and drilling down toward individual features, you start with the viewpoint of a single person in the world and then have to move and explore in order to identify and understand the nature of the virtual environment you're immersed in. And the more you're drawn into the virtual world, the more obvious the individual differences become, and the more important it is for the creator of the simulation or interactive environment to pay attention to the small design details that help form a sense of actually experiencing the virtual world.

As I have progressed from childlike wonder and delight over my ability to create a simple XNA application with real-world terrain data, to relief when each one of my new functions actually builds and runs or when I get my model assets adjusted to just the right location and height, I am becoming even more of a believer in taking gaming technology and design seriously, and looking at how we can create virtual world applications that integrate aspects from many different areas, from gaming to GIS and geospatial to geography, and even history and other disciplines.

Lost in the Virtual Fog – making the interactive connection, part 1

My dissertation proposal defense is finally over and, since I passed, it's time to get back to work and really ramp up the functionality of the Spatial Experience Engine. During my presentation, and in the Q-and-A with my committee afterward, I kept coming back to the issue of interaction in the virtual world. I've been ruminating on this quite a bit over the last few months, once I got the basic terrain and model drawing out of the way. I think the key to making all of this work and be compelling for users is, of course, the UI (User Interface).
Continue reading “Lost in the Virtual Fog – making the interactive connection, part 1”

Lost in the Virtual Fog – Seeing Really is Believing

So, this past week has been a flurry of activity for me, as I've had to schedule and then reschedule my PhD dissertation proposal defense. But it finally seems that we are set for next Tuesday morning, so now I have some time to catch up on my diary entry. I was contemplating what to write this week when I was reminded this morning of something we've noticed since we started demonstrating our work in virtual worlds, and especially with the gaming technology: seeing really is believing.

We've been giving demos of our virtual world work in our VR CAVE this week, both the older ArcScene projects and the new XNA application, and I've been really happy with the response we've been getting from people in lots of different fields. One common theme among the visitors I've talked to is that many of them had kind of heard about gaming technology, serious games, or virtual worlds, but they didn't really get what people were talking about until they were able to see it actually working. Once they got over the initial WOW factor, it was fun to watch GIS professionals, city planners, and educators start brainstorming ideas about how these applications could be used for all kinds of projects. After all the frustrating hours spent hunched over a keyboard staring at a stupid 3D model and wondering why the roof isn't square, or screaming at the computer because clearly it is incapable of understanding my perfectly written code, it is really nice to see that we are creating something that people appreciate and can see the value of.
Continue reading “Lost in the Virtual Fog – Seeing Really is Believing”

Lost in the Virtual Fog – visit from the ghost of grad students past

This week's entry is a little late, as I am burning the midnight oil trying to populate my virtual town with at least basic models so I can see what my performance is going to be with the full complement of landscape features and physics systems running. When we did the first generation of the project in ArcScene a couple of years ago, I totaled up just over 350 individual structure models. I am only about a third of the way through importing these models into the game world, and I already have about 160 models (plus about 40 trees, a number that will roughly double by the time I'm done). That means either I miscounted the first time around, and we did even more work than I thought, or I have added some more structures this time around. Either way, this is not a small undertaking.
Continue reading “Lost in the Virtual Fog – visit from the ghost of grad students past”

Lost in the Virtual Fog – A diary of my triumphs and travails

I'm sure many of you out there have noticed that I haven't been blogging as much since the first of the year. I know, I know, no excuses, I should always make time for posting cool geography and geospatial-related content. I've been knee-deep in my research stuff, working on my prototype application, and getting a crash course in lots of neat topics (and plenty of boring ones, too!).

I've posted a few times about various stages of my project, which is now centered around the development of a prototype game-based engine for displaying and exploring virtual landscapes. It's been a strange road for me to get to this point, but the more I work on this stuff, the more I feel like I'm really on to something. However, it hasn't all been a bed of roses, as they say. I've had to teach myself a new programming language (C#); absorb tons of new concepts, like depth buffers, view frustums, and particle systems; work through issues related to user interfaces and Human-Computer Interaction; and put all those technical bits together with concepts that are central to Geography and GIS. Now I have to try to meld all that with EVEN MORE new ideas from the world of game design, and design in general.

Continue reading “Lost in the Virtual Fog – A diary of my triumphs and travails”

More VR CAVE demos – our XNA virtual landscape application

I have really slacked off on the postings on the blog while I work on my research stuff, but I've finally got some pictures of my XNA virtual world application up and running in the VR CAVE at WVU. We had to do some tweaking because XNA is DirectX-based, so it runs on a separate setup from the Conduit and doesn't affect that configuration. The demo that you see in the photos is our Virtual Morgantown project, and we are slowly filling out the landscape by re-texturing all of our 350+ SketchUp models that were used in the first-generation ArcScene project and then exporting them to .FBX for use in the XNA application. So far, it's running great, and we've already created several small scenes and even have weather particle systems running. Everyone's favorite so far is the snowy Morgantown landscape!

Just a reminder that the CAVE utilizes stereo 3D, so the photos are a little blurry because they show the double images that are drawn to give the stereo effect.
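As a rough illustration of where those double images come from, a stereo pair is typically rendered by offsetting the camera along its right vector, once for each eye, and drawing the scene twice. This is a generic sketch in XNA terms, not the CAVE's actual rendering code, and the eye-separation value would be an arbitrary tuning parameter for the scene's scale:

```csharp
// Build left- and right-eye view matrices from a single camera pose.
// eyeSeparation is a hypothetical tuning value, not taken from our setup.
Matrix[] StereoViews(Vector3 position, Vector3 target, Vector3 up, float eyeSeparation)
{
    Vector3 forward = Vector3.Normalize(target - position);
    Vector3 right = Vector3.Normalize(Vector3.Cross(forward, up));
    Vector3 offset = right * (eyeSeparation / 2f);

    // Each eye gets its own slightly shifted viewpoint; drawing the scene
    // with both views produces the doubled image you see in the photos.
    Matrix leftView = Matrix.CreateLookAt(position - offset, target - offset, up);
    Matrix rightView = Matrix.CreateLookAt(position + offset, target + offset, up);
    return new[] { leftView, rightView };
}
```

The stereo glasses then route each rendered view to the matching eye, which is why the bare photos show both images superimposed.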

XNA in CAVE

Closeup XNA in CAVE

Want your own supercomputer – just grab some PS3s!

Last year, scientists such as physics professor Gaurav Khanna of UMass Dartmouth and Frank Mueller, a computer science professor at NC State, made news in tech and scientific circles by building supercomputing clusters from Sony PlayStation 3s. Their clusters have the computing power of a small supercomputer, but cost only around $5,000, compared to the millions that supercomputers generally cost.

Now, the new PS3Cluster Guide has become available online, and gives instructions and tips on how you can set up your own supercomputer with PS3s. Written by Khanna and his colleague Chris Poulin, the guide was developed as part of the Cluster Workshop project, which is being partially funded by the National Science Foundation, and was first announced and demonstrated at the 2nd Annual Georgia Tech, Sony/Toshiba/IBM Workshop on Software and Applications for the Cell/B.E. Processor.

So, get your spare change together and start supercomputing!

Via Physorg.com

UK government wants you to show them a better way

If you live in the UK (or are just interested in improving access to public data), you can now add your idea to a growing movement to make more public information available to the public. A competition called Show Us A Better Way, from the UK government's Power of Information Taskforce, has just been announced, offering prizes of up to 20,000 pounds for ideas for new products that utilize public information. One example of the data sources that can be used is the Ordnance Survey mapping available through the OpenSpace beta. Many of the other data sets and services, such as Carbon Footprint data or FixMyStreet.com, are also crying out for spatial solutions, so if you have an idea, be sure to submit it before the competition closes at the end of September. According to the FAQs, you do not have to reside in the UK, but all solutions would be implemented for use in the UK.

Via The Guardian

Ten Commandments of Egoless (insert what you do)

TheSteve0 tweeted this great article about programming in a team environment, but the ten points it makes transcend programming and can easily fit any team activity, from a class project to sending a lander to Mars. With new and innovative workflows coming on the scene in the tech arena (e.g., the spread of Agile), it is sometimes hard to look around at the insular structure of some working groups. Articles like this one give me hope that we can tip over the silos and all play together. Perhaps it is just the optimist in me. Here is my favorite:

3. No matter how much “karate” you know, someone else will always know more. Such an individual can teach you some new moves if you ask. Seek and accept input from others, especially when you think it’s not needed.

I definitely recommend that you head over and take a look at the article to see if it resonates with you as it did with me.

Builder.com Ten Commandments of Egoless Programming