Lost in the Virtual Fog – Seeing Really is Believing

So, this past week has been a flurry of activity for me, as I've had to schedule and then reschedule my PhD dissertation proposal defense. But, finally it seems that we are set for next Tuesday morning, so now I have some time to catch up on my diary entry. I was contemplating what to write this week, when I was reminded this morning of something we've noticed since we started demonstrating our work in virtual worlds and especially the gaming technology: seeing really is believing.

We've been giving demos of our virtual world work in our VR CAVE this week, both the older ArcScene projects and the new XNA application, and I've been really happy with the response we've been getting from people in lots of different fields. One common theme with the visitors that I've talked to is that many of them have kind of heard about gaming technology, serious games, or virtual worlds, but they didn't really get what people were talking about until they were able to see it actually working. Once they got over the initial WOW factor, it was fun to watch GIS professionals, city planners, and educators start brainstorming ideas about how these applications could be used for all kinds of projects. After all the frustrating hours spent hunched over a keyboard staring at a stupid 3D model and wondering why the roof isn't square, or screaming at the computer because clearly it is incapable of understanding my perfectly written code, it is really nice to see that we are creating something that people appreciate and can see the value of.

Lost in the Virtual Fog – visit from the ghost of grad students past

This week's entry is a little late, as I am burning the midnight oil trying to populate my virtual town with at least basic models so I can see what my performance is going to be with the full complement of landscape features and physics systems running. When we did the first generation of the project in ArcScene a couple of years ago, I totaled up just over 350 individual structure models. I am only about a third of the way done with importing these models into the game world, and I already have about 160 models (and about 40 trees, which will be about double when I'm done). That means I either miscounted the first time around, and we did even more work than I thought, or I have added some more structures this time around. Either way, this is not a small undertaking.

Lost in the Virtual Fog – A diary of my triumphs and travails

I'm sure many of you out there have noticed that I haven't been blogging as much since the first of the year. I know, I know, no excuses, I should always make time for posting cool geography and geospatial-related content. I've been knee-deep in my research stuff, working on my prototype application, and getting a crash course in lots of neat topics (and plenty of boring ones, too!).

I've posted a few times about various stages of my project, which is now centered around the development of a prototype game-based engine for displaying and exploring virtual landscapes. It's been a strange road for me to get to this point, but the more I work on this stuff, the more I find myself really feeling like I'm on to something. However, it hasn't all been a bed of roses, as they say. I've had to teach myself a new programming language (C#), absorb tons of new concepts like depth buffers, view frustums, and particle systems, wrestle with issues related to user interfaces and Human-Computer Interaction, and put all those technical bits together with concepts that are central to Geography and GIS. Now, I have to try to meld all that with EVEN MORE new ideas from the world of game design, and design in general.
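To give a flavor of one of those concepts: view-frustum culling is the trick that keeps a big virtual landscape drawable, by skipping any object the camera can't see. Here's a minimal sketch of how that looks in XNA (an illustrative example, not code from my prototype):

```csharp
// Minimal view-frustum culling sketch in XNA.
// An object is only worth drawing if its bounding volume intersects
// the frustum -- the pyramid of space visible to the camera.
using Microsoft.Xna.Framework;

public static class CullingExample
{
    public static bool IsVisible(Matrix view, Matrix projection, BoundingSphere modelBounds)
    {
        // The frustum is built from the combined view and projection matrices.
        var frustum = new BoundingFrustum(view * projection);
        return frustum.Intersects(modelBounds);
    }
}
```

With a few hundred buildings and trees in the scene, a cheap test like this before each draw call can make the difference between a smooth walkthrough and a slideshow.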


Media Vehicle – Virtual Reality Mecha Style

For those of you who have always wanted to pilot your own giant mecha battle robot, Japanese researchers have taken another step toward the dream with the development of the Media Vehicle (site is in Japanese), a pod-like personal VR chamber that envelops the user in a spherical display with no access to outside stimuli, while leaving the legs free to move. Definitely not for claustrophobics, but it's still a pretty amazing machine.

Video of the Media Vehicle in action

Via DVICE

Checking out ARSights for Google Earth models

Many of you may have already seen the press releases and various posts about ARSights, an augmented reality app that lets you look at Google Earth models on your desktop. ARSights is from the Italian company Inglobe, which has developed ARMedia as a platform for augmented reality functionality. Some of our former colleagues here were working on some AR projects, so we thought we'd give ARSights a quick whirl and see what it's all about. ARSights requires a few things to work: a webcam, Google Earth's browser plugin, one of ARSights' 3D models from Google Earth (right now it only works with selected models, not just any model), a printed copy of the marker target, and the ARSights application. There are instructions on the ARSights site to get everything set up.
Once we got everything set up, we downloaded the Parthenon model and gave it a try. It seems to work pretty well, and you can look at your webcam view and see the model as if it were sitting on the physical target in front of you. You can spin the model and zoom in and out by manipulating the target marker.

ARSights is a nice app as an introduction to the concepts of augmented reality, and it's pretty nifty. I can especially see it being useful in collaborative educational settings. It's only one of a number of projects working on this type of technology, though, as we saw a very similar application during the Labs demos at Autodesk University back in December (we shot some video of that demo and will be posting it soon on VerySpatial TV).

60 TB of EverQuest 2 Data for Science

Sony released around 60 terabytes of raw log data to a group of researchers for analysis. Lots of different disciplines appear to have mined the data looking for interesting patterns. The data spans four years and 40,000 players. What strikes me as particularly noteworthy is that none of the researchers seem to be geographers, even though some of what they're talking about is part and parcel of geography. It's also interesting to note that some of the research shows the limitations of current social science research methods. Simply put, the data is just too darn large. It also showed that some existing survey methods tended to under-report phenomena, such as the number of older female gamers. All in all, I'd love to get my hands on some of that data. However, it would be fun to do the same thing with World of Warcraft, which has significantly more players and thus significantly more data points.

More VR CAVE demos – our XNA virtual landscape application

I have really slacked off on the blog postings while I work on my research, but I've finally got some pictures of my XNA virtual world application up and running in the VR CAVE at WVU. We had to do some tweaking because XNA is DirectX-based, so it runs on a separate setup from the Conduit and doesn't affect that configuration. The demo that you see in the photos is our Virtual Morgantown project, and we are slowly filling out the landscape by re-texturing all of our 350+ SketchUp models that were used in the first-generation ArcScene project, and then exporting them to .FBX for use in the XNA application. So far, it's running great, and we've already created several small scenes and even have weather particle systems running. Everyone's favorite so far is the snowy Morgantown landscape!
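For those curious about the import step, here's a minimal sketch of how a converted .FBX model gets loaded and drawn through XNA's content pipeline. The class and asset names here are placeholders, not our actual project code, but Content.Load<Model> and BasicEffect are the standard XNA route:

```csharp
// Sketch of loading and drawing an .FBX model in XNA (hypothetical names).
// The .fbx file is added to the content project and compiled to .xnb at
// build time, then loaded by asset name at runtime.
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

public class BuildingRenderer
{
    private Model building;

    public void LoadContent(ContentManager content)
    {
        // "Models/Building01" is a placeholder asset name.
        building = content.Load<Model>("Models/Building01");
    }

    public void Draw(Matrix world, Matrix view, Matrix projection)
    {
        // Each mesh in the model is drawn with its own effects,
        // here assuming the default BasicEffect from the pipeline.
        foreach (ModelMesh mesh in building.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.World = world;
                effect.View = view;
                effect.Projection = projection;
                effect.EnableDefaultLighting();
            }
            mesh.Draw();
        }
    }
}
```

Multiply that by 350+ structures and you can see why the re-texturing and export work is the slow part, not the rendering.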

Just a reminder that the CAVE utilizes stereo 3D, so the photos are a little blurry because they show the double images that are drawn to give the stereo effect.

XNA in CAVE

Closeup XNA in CAVE

Synthin’ with Photosynth

As I'm sure many of you remember, I have been a fan of Microsoft Research's Photosynth since we saw the first tech previews back in July 2006. Today I finally got some time to sit down and try it out myself. After setting up my profile on the Photosynth site and downloading and installing Photosynth, I checked out the Photosynth Guide and headed out to Woodburn Circle, a focal point on the WVU downtown campus. I took about 200 photos and brought them into the Photosynth dialog to pare the collection down to about 190. I clicked the magic button to start the synth, and about 40 minutes later (it was a pretty big collection!) my synth was done and uploaded. The viewer shows the aligned photos as well as the point cloud Photosynth generates (it can be difficult to see the cloud at some angles).

When I saw the finished synth, I have to say I was even more impressed than I was after seeing all the examples already out there. Check it out below and see what you think!