AVSP: Live! at the AAG 2009 in Las Vegas

In just three weeks, geographers from around the world will be descending on Las Vegas for the annual meeting of the Association of American Geographers. We will be attending and presenting at the conference (on Wednesday), and we will also be recording an episode of A VerySpatial Podcast live on Sunday afternoon, March 22, at 3:10pm. Everyone is invited to attend the live show in North Hall N110 of the Las Vegas Convention Center.

Sub-glacial Antarctic mountains mapped

BBC News has an interesting article on the mapping of the Gamburtsevs, which lie under the ice in Antarctica. The article describes the use of radar, magnetic, and sonic/seismic remote sensing methods by a group of scientists, engineers, pilots, and support staff from the UK, the US, Germany, Australia, China, and Japan. Definitely an interesting read. Head over to check it out.

BBC NEWS | Science & Environment | ‘Ghost peaks’ mapped under ice.

VisiWiki

VisiWiki, or the Visual Wiki, is like looking inside someone's head. It almost provides a cognitive map of specific topics such as geography. I looked up Guns, Germs, and Steel and it was like wandering Wikipedia, YouTube, Google Maps, Flickr, and Yahoo without all the browsing time. The topic Geographic Information Systems of course brings up, among other things, John Snow's map of the cholera outbreak. It's fun to explore, and you can imagine what it will be like as it becomes more robust.

Gesture-based input for 3D

Engadget has a post on iPoint 3D, which brings gesture-based inputs to 3D displays, with an example pic straight out of an Autodesk demo. The pic aside, I expect the technology they are discussing is fairly similar to what Microsoft uses for the Surface (without having to actually touch the panel) and has been shown off a few times by CompSci students working on class projects with a simple web cam. This isn't to downplay the potential of these interfaces; on the contrary, I want to know when they will be stable enough to include in the OS (with both Windows and Mac adding touch in their next releases). These gesture-based interfaces aren't for every application, but tie them to facial recognition and you can have a multiuser interface that tracks each user's 'settings'. Link them to a 3D representation of a landscape and imagine being able to Superman your way through the scene with the traditional one-handed flying pose. What if we could tie sign language gestures into the interface to take advantage of an existing standard and teach the rest of us a few words at the same time?
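
For the curious, here is roughly what one of those simple webcam class projects might look like: a minimal Python/OpenCV sketch (entirely our own illustration, not iPoint 3D's method) that calls a large horizontal motion a "swipe". The threshold values and the tiny gesture vocabulary are assumptions made purely for the example.

```python
# Minimal webcam swipe-gesture sketch (illustrative only, not the iPoint 3D method).
# Requires: pip install opencv-python (uses the OpenCV 4.x findContours signature).
import cv2

SWIPE_PIXELS = 120      # assumed horizontal travel that counts as a swipe
MIN_BLOB_AREA = 5000    # ignore tiny motion blobs (camera noise)

cap = cv2.VideoCapture(0)
_, prev = cap.read()
prev_gray = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)
last_x = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

    # Difference the current frame against the previous one to find motion.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    big = [c for c in contours if cv2.contourArea(c) > MIN_BLOB_AREA]
    if big:
        x, y, w, h = cv2.boundingRect(max(big, key=cv2.contourArea))
        cx = x + w // 2  # center of the largest moving region (e.g. a hand)
        if last_x is not None and abs(cx - last_x) > SWIPE_PIXELS:
            print("swipe right" if cx > last_x else "swipe left")
        last_x = cx

    prev_gray = gray
    cv2.imshow("gesture sketch", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A real product would obviously need hand detection rather than raw frame differencing, plus a much richer gesture vocabulary, but the basic loop of "watch the camera, find the moving thing, interpret its motion" is the same idea.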

So come on iPoint and others, let’s see what happens when we step away from the monitor with our gestures!

Poll reminder

Just a quick reminder that we have a poll going on over at VerySpatial.com. The question is “Which upcoming conference would you most like to attend?” with the possible answers of ASPRS, AAG, ESRI Dev Summit, AGILE, and Where 2.0. The Dev Summit is currently in 2nd place with Where 2.0 in the lead. If you are heading to Where 2.0 this year be sure to use our coupon code whr09vsp to save an additional 10% off the early registration fee.

Media Vehicle – Virtual Reality Mecha Style

For those of you who have always wanted to pilot your own giant mecha battle robot, Japanese researchers have taken another step toward the dream with the development of the Media Vehicle (site is in Japanese), a pod-like personal VR chamber that envelops the user in a spherical display with no access to outside stimuli, while leaving the legs free to move. Definitely not for claustrophobics, but it's still a pretty amazing machine.

Video of the Media Vehicle in action

Via DVICE

LA Times mapping Los Angeles neighborhoods

As part of the Data Desk section of the L.A. Times website, the paper is unveiling a project to map the neighborhoods of Los Angeles, California. As described in an article discussing the mapping project, the purpose is to create a map that reporters can use as a reference for consistent information on the naming of L.A.'s many neighborhoods and landmarks. However, the paper's attempt to draw lines and define boundaries for these local areas is adding controversy to the project, as L.A. residents raise numerous questions and comments about how and where neighborhoods are being demarcated. That input from the communities, however, is exactly what the LA Times is looking for: “Los Angeles is a city that remakes itself constantly, so drawing boundaries for communities can be perilous. City officials are happy to designate community names, but have never been willing to set borders. But we at The Times are preparing to do just that, and we’d like to invite your help.”

The project actually involves quite a bit of mapping and database work. The base map started with US Census tracts as the initial boundaries, which were then adjusted to reflect the paper's information on neighborhood boundaries, and the population data associated with the census tracts was readjusted to match. The interactive map that has been made available was built with free and open source software, including OpenLayers, Django, and PostgreSQL.
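
To give a sense of what that kind of stack involves, here is a minimal, hypothetical GeoDjango sketch, not the Times' actual schema or code, showing a neighborhood polygon stored in PostGIS and an area-weighted population estimate drawn from overlapping census tracts. The model and field names are invented for illustration.

```python
# Hypothetical GeoDjango sketch of a neighborhood-mapping backend
# (illustrative only; not the L.A. Times' actual schema or code).
# Requires Django with django.contrib.gis enabled and a PostGIS database.
from django.contrib.gis.db import models


class CensusTract(models.Model):
    tract_id = models.CharField(max_length=20)
    population = models.IntegerField()
    geom = models.MultiPolygonField(srid=4326)


class Neighborhood(models.Model):
    name = models.CharField(max_length=100)
    geom = models.MultiPolygonField(srid=4326)  # boundary adjusted from tract lines

    def estimated_population(self):
        """Area-weighted population from the census tracts that overlap the boundary."""
        total = 0.0
        for tract in CensusTract.objects.filter(geom__intersects=self.geom):
            overlap = tract.geom.intersection(self.geom)
            if tract.geom.area:
                total += tract.population * (overlap.area / tract.geom.area)
        return int(round(total))
```

An OpenLayers front end would then just request these polygons (for example as GeoJSON) and draw them as a vector layer on the interactive map.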

Checking out ARSights for Google Earth models

Many of you may have already seen the press releases and various posts about ARSights, an augmented reality app that lets you look at Google Earth models on your desktop. ARSights is from the Italian company Inglobe, which has developed ARMedia as a platform for augmented reality functionality. Some of our former colleagues here were working on AR projects, so we thought we'd give ARSights a quick whirl and see what it's all about. ARSights requires a few things to work: a web cam, the Google Earth browser plugin, one of the 3D models available through ARSights (right now it doesn't work with just any Google Earth model), a printed copy of the marker target, and the ARSights application itself. There are instructions on the ARSights site to get everything set up.

Once we got everything set up, we downloaded the Parthenon model and gave it a try. It seems to work pretty well: you can look at your webcam view on the desktop and see the model as if it were sitting on the physical target in front of you. You can spin the model and zoom in and out by manipulating the target marker.
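
If you are wondering how that marker trick works in general, here is a rough Python/OpenCV sketch of the idea, using OpenCV's ArUco markers rather than whatever ARSights/ARMedia actually does internally: the webcam frame is searched for a printed target, and the target's apparent size can stand in for zoom. The marker dictionary and the size-for-distance shortcut are assumptions made for the example.

```python
# Rough sketch of marker-based AR tracking (not the ARSights/ARMedia implementation).
# Requires: pip install opencv-contrib-python (API shown is the OpenCV 4.7+ ArucoDetector).
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)

    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        # The marker's apparent pixel area stands in for distance: a bigger quad
        # means the printed target is closer, so a model would be drawn larger.
        area = cv2.contourArea(corners[0].reshape(-1, 2))
        print(f"marker {ids[0][0]} visible, apparent area {area:.0f} px^2")

    cv2.imshow("marker sketch", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A full viewer like ARSights goes further and estimates the marker's 3D pose so the model can be rendered in proper perspective and spun by rotating the printed card.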

ARSights is a nice app as an introduction to the concepts of augmented reality, and it's pretty nifty. I can especially see it being useful in educational and collaborative settings. It's only one of a number of projects working on this type of technology, though; we saw a very similar application during the Labs demos at Autodesk University back in December (we shot some video of that demo and will be posting it soon on VerySpatial TV).