The BBC is reporting that NASA has put out a Request For Information for any groups or agencies who might want to create a NASA MMORPG. The idea is to create a virtual world of simulated science experiments that students can explore. Hopefully whoever gets to make the final product will include a significant geospatial portion. Look for Very Spatial to follow this fairly closely in the coming months, as it dovetails nicely with Jesse's, Sue's, and my research interests!
So I have been following the bits and pieces on Engadget, Gizmodo, and CrunchGear and have cobbled together my highlights from today.
GPS – toys, toys everywhere
3D Monitors – Active stereo is expensive and kind of a pain sometimes, so I hope that some of the passive 3D technologies announced over the last few years will eventually make a mark in the market.
Microsoft Keynote – two of Bill’s big things (out of 3) are location and touch…now if I could just find the quote to prove it
A while back, we talked about Sony’s upcoming 3D virtual world/social network for the Playstation 3, dubbed Home. Right now, Home is in a closed beta trial, and is scheduled for release in early 2008. GameTrailers.com and Veoh recently posted a 25-minute walkthrough of the beta of Home, showing some of its features and 3D environments. The look of Home is impressive, and the realistic 3D graphics and embedded multimedia and interactive capabilities are pretty cool. Unlike Second Life, Home’s basic environment is pre-designed to look like a typical real-world environment, although users can customize their avatar’s look and get their own apartment to customize as well. Home’s main focus is as a 3D online social network, but it demonstrates again how advanced 3D graphics applications are converging with the idea of creating virtual worlds that are based on real-world data and features.
We had a great interview with Chris Hanson of 3D Nature this week, but we aren’t going to post it until after all of the holiday travel and our year in review episode. In the interview Chris pointed out the great sale they are running on the GIS-friendly Visual Nature Studio 2. I have been using their World Construction Set, and more recently VNS, since the late 1990s and am a huge fan. I actually received my first literal oohs and ahhs during a presentation where I was showing some reconstructions of prehistoric landscapes made with WCS. Anyway…great software at a great price for the holidays.
Google has added a few new features to its Google Maps application (funny how Google Local never really caught on, isn’t it?). One, they’ve added the ability to edit markers, which we mentioned in the past. Two, they’ve added a new “street view” function for some urban areas that’s worth a look-see. I checked out Pittsburgh, which I know best, and it was pretty neat to see 360 degrees around from a point. Finally, and possibly nearest to my day job, they’ve added a “terrain view” button on their maps. Click it and you’ll see hillshade terrain for an area. We zoomed in to West Virginia and took a look around. The data there is remarkably similar to the 3m DEM product that was done in West Virginia last year and is available for free download. If that’s the source, I can’t help but notice there’s no credit for WV… Still, even with that nitpick, the addition of terrain and street view is pretty cool, even leading at least one commentator to wonder if Google Earth is on its way out!
So, now that our conference presentation is over, I thought I would tell you a little bit about what I have been working on for the last few months that has eaten into my blogging time, among other things. After we finished our prototype virtual world last spring, recreating historical Morgantown, WV in ArcScene using SketchUp models and GIS data layers, we realized that we were about at the max of the functionality we could leverage. I started looking at other ways to combine 3D reconstructions with embedded data delivery, and read about Microsoft’s free game development environment, XNA Game Studio Express. Now, when I say free, there are some caveats, because you have to have a computer that’s capable of running some pretty powerful graphics and there are some restrictions on use. You also do need some coding experience to get up and running quickly, but you don’t need to be a full-time developer to pick up the necessary skills. Finished games can be run on Windows or on the Xbox 360 console, and using the game framework opens up all kinds of possibilities for amazing graphics and interactive functionality.
For those of you not subscribed to the V1 Magazine Newsletter, Jeff and Matt give their take on a topic we have often talked about, gaming technologies in the realm of geospatial. Check it out and let us know what you think about the subject.
Nokia is taking an interesting twist with their acquisition of Navteq – they want to focus on pedestrians. At a time when companies are pouring money into in-car navigation systems, Nokia apparently sees a hole in the market. Clearly Nokia has a delivery mechanism at hand for this as well. The other interesting twist is that Nokia is expecting its customers to help keep the maps up to date. It’s sort of the ultimate consumer-level participatory GIS! It will be interesting to follow this to see if Nokia is ultimately successful.
Over on the Google Earth educators site there is a GE tour of Asia in honor of the 2007 Geography Awareness Week topic. If you get a few minutes free you should head over and download the KMZ and take the virtual tour.
A couple of projects by Japanese researchers show that work in the area of augmented or mixed reality is really pushing the boundary between the real and virtual worlds. Michihiko Shoji, a researcher at the Yokohama National University Venture Business Laboratory, has developed a virtual humanoid called U-Tsu-Shi-O-Mi. The robot is covered in a green cloth skin, and a head-mounted VR system virtually maps a human avatar onto the robot. The user wearing the display can then interact physically with the robot, adding a sense of touch to their virtual interactions with the avatar.
The second project relates to the very specialized field of brain-computer interface (BCI), which covers technologies that allow humans to mentally control computers. Researchers at the Keio University Biomedical Engineering Laboratory have actually developed a system where a user wearing a headpiece that monitors brain activity in certain areas can cause an on-screen avatar to move in Second Life. The research is in the early stages, but the lab has a video online that shows their project in action (the page is in Japanese, but if you look above the photos, you’ll see the video links to either Windows or Macintosh versions).
I am sure all kinds of Matrix and Minority Report analogies come to mind, but it’s just unbelievable sometimes how fast research is moving in these areas, certainly faster than our ability to deal with a lot of issues related to the use of these kinds of technologies in the future.
Via Pink Tentacle