If you haven’t seen the demo of Microsoft Research’s Street Slide, it’s a pretty cool addition to Bing Streetside that isn’t available yet, but will be presented at SIGGRAPH 2010. While Google Street View and Bing Streetside let you see photo representations of an area as you navigate through it, you’re basically limited to the perspective from your position on the centerline of the roadway as you look left or right. Street Slide lets you zoom out and take a side-scrolling look at the whole side of the street, moving along and panning over the streetscape. It looks like you can get a panoramic view as well. If you want to see Street Slide in action, check out this video:
While I continue my research in the area of immersive virtual worlds and serious gaming, I have also been doing a lot of work recently in the area of digital cities, trying to implement the idea of a “smart” 3D city or town landscape that can be used as a visualization and collaboration tool for municipal management and planning. Once again, an IBM project has grabbed my attention by combining serious gaming and digital cities into one cool project: CityOne. Today I saw a press release announcing the CityOne project: a SimCity-like game using real-world data, designed to bring players together to help work through and solve real problems facing cities around the world. IBM will be introducing CityOne at the Agility@Work Zone at Impact2010 in Las Vegas this week (this is the IBM software conference for business and IT, not to be confused with another conference called IMPACT 2010, which some blogs and sites have linked to by mistake).
Check out the CityOne preview:
While editing the podcast today I realized I did a horrible job of describing Microsoft’s new features. In my defense, I was talking about them while getting errors in two different browsers as I tried to reach Bing Maps, all while trying to decide whether or not to bail on that news item. The upshot: I should have bailed, but here is a bit of what I probably would have said.
I am happy with the updates that have made their way into both Bing Maps and Google Maps over the last few days, but the thing I am truly excited about is Microsoft’s integration of Photosynth-like 3D surfaces that show up when a Streetside scene moves. Pick a place like the corner of Wabash and Monroe in Chicago, where a portion of the El creates an urban canyon effect. As you move east to west along Monroe, under the El and toward State, you see the impact of texturing the images onto the 3D models. It takes what is already a great tool for getting to know an area from informational to immersive. This will not hold true in rural areas, but the difference it makes in downtown Charlotte and Chicago (the only two areas I have looked at in Streetside) is significant. It is good to see some of the news we talked about back in 2007 making such an impact today.
The difference between Google and Bing Maps continues to grow. The underlying features aren’t radically different, but the difference in feel between the two is notable. From my perspective, Bing continues to push toward a professional set of tools: not something you use to place a map on your personal webpage, but a set of tools to encourage companies to embed and advertise through Microsoft’s platform.
I can’t pass up a chance to post a cool interactive 3D visualization, like the NY Times map of the Vancouver Olympic venues. They’re using Intermap’s elevation data, and imagery by Digital Globe, Province of British Columbia, and TerraMetrics via Google Earth. The 3D visualization starts with an overview of the Olympic venues, including Vancouver and the surrounding area, and then lets you zoom in for a look at specific venues, with embedded photos from each location. Winter Olympic competitions start on February 13th, and it would be nice if they could add to the photo collections with shots from the actual events and medal ceremonies.
Before there was Avatar, and even before the Fisher-Price Viewmaster, there was stereoscopy: stereo photographs that presented scenes in life-like three dimensions. A recent book, “A Village Lost and Found,” collects one set of stereoscopic photos of 1850s village life. It is a picture book that evokes the Victorian times of a specific village through a series of 3-D images meticulously gathered over a lifetime of research. But one of the most fascinating aspects of the work is its relevance to geospatial and social networking technologies today. Brian May and his co-author spent years trying to determine whether the village was a composite of multiple villages or a specific location, but it wasn’t until 2003, when they asked for help through the Internet community and someone responded with a “Well, I live there,” that the mystery was solved. How many other geographical mysteries, big and small, have been solved or are waiting to be solved by the world’s increased connectivity?
As many of you know, we have been involved in research related to geovisualization, 3D immersive virtual environments, and 3D virtual worlds for quite a while, and we’ve talked about some of the issues and opportunities a number of times on the podcast. One of the biggest obstacles to a compelling VR environment is getting a true sense of physical immersion, and a project called Virtualization Gate from INRIA and Grenoble University in France has developed a full-body immersive system that places the user within a 3D virtual world that can be interacted with. The video below, created for SIGGRAPH 2009, shows the system in action, but also shows just how difficult it is to set up a system like this with current technology. Still, this is a pretty cool project, and I wish our CAVE setup at WVU could do this:
Via Singularity Hub
I have been remiss in not doing any diary entries for a while, but I have been feverishly working to get my demo XNA application ready for the ESRI UC presentation. Finally, today, I think I got the last bit of functionality on my list working, so I am pretty excited and crossing my fingers that everything will run right at the conference. Of course, you can never predict a live demo, so tonight I am recording a few videos of Virtual Morgantown in action, using a cool little software tool called GameCam.
Last month, we were happy to be able to go to Pittsburgh to cover the Game Education Summit, held at Carnegie Mellon’s Entertainment Technology Center. We got some nice interviews, including the conversation with ETC Pittsburgh Director Drew Davidson, which we featured on Episode 206 of the podcast. While I’m going to have another entry soon specifically about my thoughts on the Game Education Summit, since I’ve gotten back I’ve had to burn the midnight oil to get Virtual Morgantown looking and running the way I want it for this stage in the project. As I’ve been sitting here opening each model in SketchUp, cleaning up what I can, and exporting them to the XNA application as .X files (many, many thanks for Zbyl’s .X Exporter plugin!), I am continually reminded of the challenges of working at this scale after coming from a GIS background.
When you can zoom in and essentially immerse yourself at a nearly 1:1 scale in the virtual world, issues that never would have mattered suddenly become vital. Even Google Earth, World Wind, ArcGIS Explorer, and the other virtual globes aren’t really meant to be used at that scale, as their background imagery and 3D models look best from a viewpoint well above ground level. So, when you get down to the ground and are actually representing real features, you have to give each one at least some individual attention. It’s often the opposite of the way most of us are trained. Rather than looking for commonalities and creating data layers that characterize those similarities, you have to bring out the aspects that might make a particular feature unique.
It’s a strange notion for a lot of geographers and GIS people, I think, to change their perspective from starting with a zoomed-out view of the world and then drilling down toward individual features, to starting with the viewpoint of a single person in the world and then having to move and explore in order to identify and understand the nature of the virtual environment they’re immersed in. And the more you’re drawn into the virtual world, the more obvious the individual differences become, and the more important it is for the creator of the simulation or interactive environment to pay attention to those small design details that help form a sense of actually experiencing the virtual world.
As I have progressed from childlike wonder and delight over my ability to create a simple XNA application with real-world terrain data, to relief when each one of my new functions actually builds and runs or when I get my model assets adjusted to just the right location and height, I am becoming even more of a believer in taking gaming technology and design seriously, and looking at how we can create virtual world applications that integrate aspects from many different areas, from gaming to GIS and geospatial to geography, and even history and other disciplines.
Over the years we have seen many geovisualization technologies emerge, each with its own ups and downs (pun, sadly, intended). All of the approaches, however, can be broken down into two styles: 1) flythrough visualizations, where the creator has set up a prescribed flight path that the viewer cannot escape (often distributed as a video), and 2) interactive visualizations, where the viewer has control over how and where they approach the visualization they have been given.
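The distinction is easy to make concrete in code. Here is a minimal sketch (mine, in Python for brevity; the function names are hypothetical and not from any of the tools discussed here): in a flythrough, the camera position is a pure function of time along fixed keyframes, while in an interactive visualization the camera is updated from viewer input every frame.

```python
def lerp(a, b, t):
    """Linear interpolation between two 3D points."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def flythrough_camera(keyframes, t):
    """Prescribed path: camera position is a pure function of time t in [0, 1].
    The viewer cannot deviate from the keyframe path."""
    segment_count = len(keyframes) - 1
    s = min(int(t * segment_count), segment_count - 1)  # which segment we're on
    local_t = t * segment_count - s                     # progress within it
    return lerp(keyframes[s], keyframes[s + 1], local_t)

def interactive_camera(position, move_vector, speed=1.0):
    """Viewer-controlled: each frame, the camera moves by whatever input
    the viewer supplies, so they choose how and where to approach the scene."""
    return tuple(position[i] + move_vector[i] * speed for i in range(3))

# A short flight path over some terrain (x, y, altitude):
path = [(0.0, 0.0, 100.0), (50.0, 0.0, 80.0), (100.0, 50.0, 60.0)]
print(flythrough_camera(path, 0.5))                       # halfway along the path
print(interactive_camera((0.0, 0.0, 100.0), (1.0, 0.0, -0.5)))  # one user-driven step
```

The flythrough version is what you would bake out to a video; the interactive version is the inner loop of a virtual globe or game engine.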
My dissertation proposal defense is finally over and, since I passed, it’s time to get back to work and really ramp up the functionality of the Spatial Experience Engine. During my presentation, and in the Q-and-A with my committee afterward, I kept coming back to the issue of interaction in the virtual world. I’ve been ruminating on this quite a bit over the last few months, once I got the basic terrain and model drawing out of the way. I think the key to making all of this work and be compelling for users is, of course, the UI (user interface).
This cool video shows how programmer Shamus Young created a procedurally generated 3D nighttime cityscape. The program generates everything anew each time it runs and doesn’t use any pre-stored textures or art assets. He gives a great step-by-step explanation of how he did the project in a series of blog posts.
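To get a feel for the core idea, here is a toy sketch of my own (Python, not Shamus Young’s actual code, and all of the names and parameters are assumptions): every building is derived on the fly from a deterministic function of its grid position and a seed, so nothing is stored on disk and the same seed always reproduces the same city.

```python
import random

def building(seed, x, z):
    """Derive one building's dimensions from its grid coordinates alone.
    A per-cell RNG seeded from (seed, x, z) makes the result deterministic."""
    rng = random.Random(seed * 1_000_003 + x * 1009 + z)
    return {
        "height": rng.randint(3, 40),   # storeys
        "width": rng.randint(8, 20),    # footprint, arbitrary units
        "depth": rng.randint(8, 20),
        "lit": round(rng.random(), 2),  # fraction of windows lit at night
    }

def city_block(seed, rows, cols):
    """A rows x cols block of buildings; nothing is stored, everything is derived."""
    return [[building(seed, x, z) for x in range(cols)] for z in range(rows)]

block = city_block(seed=42, rows=2, cols=3)
assert city_block(42, 2, 3) == block  # same seed, identical city every run
```

A real implementation would procedurally generate window textures, street lighting, and geometry the same way, but the principle is the same: the city is a function of its seed, not a collection of assets.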