When the Street View component of Google Maps debuted, I was amazed. More than anything else, I was shocked at what must be the extraordinary expense and effort of gathering and stitching together that much photographic data.
I was not optimistic about the prospects that the project could be maintained. Could even a corporation as well-heeled as Google afford the upkeep on a fleet of cars equipped with 360° cameras and hard drives, sending them out frequently enough to keep the imagery from falling terribly behind the times—and then give it away for free?
This week, a new component of Google Maps debuted. Along the top of the Google Maps window is a row of buttons offering alternate views: “Traffic”, “Satellite”, “Terrain”, and so on. A new button, “More”, appeared a couple of days ago. Roll over the button and you have the option of adding Wikipedia entries or geotagged photos.
The addition of photography is the most interesting to me, because I expect the availability of geotagged photos to balloon in the years to come. With the boom in sites that allow photos to be indexed and searched by location, how long before we see cameras with built-in GPS that automatically add latitude and longitude to the photo file's metadata?
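To make the idea concrete: EXIF metadata already stores GPS coordinates as degrees, minutes, and seconds plus a hemisphere reference, and searching photos "by location" boils down to converting those to decimal degrees and measuring distance. Here is a minimal sketch of both steps; the sample coordinates near the Googleplex are hypothetical, and real EXIF readers would pull these values out of the image file rather than hard-coding them.

```python
import math

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds to signed decimal degrees.

    ref is the hemisphere tag: "N"/"S" for latitude, "E"/"W" for longitude.
    Southern and western hemispheres are negative by convention.
    """
    dec = degrees + minutes / 60.0 + seconds / 3600.0
    return -dec if ref in ("S", "W") else dec

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical photo geotagged near Mountain View, CA
lat = dms_to_decimal(37, 25, 19.07, "N")
lon = dms_to_decimal(122, 5, 2.4, "W")
print(round(lat, 4), round(lon, 4))  # decimal coordinates ready for indexing
```

With coordinates in decimal form, a photo site could answer "show me pictures within a kilometre of here" with a single distance comparison per photo, which is exactly the kind of query that makes a geotagged-photo layer on a map possible.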
And with the explosion in the availability of these photos, how hard would it be to use a Photosynth-like program to stitch them into existing Street View photos, updating the view whenever someone, anywhere, takes a picture and uploads it?
Eventually, I expect that products like Street View will morph into a fully immersive 3D model of the world, with heavily trafficked areas of the real world being updated in something close to real time. Wouldn't it be great if all the security cameras in the typical urban space could feed into that model? We would be just as watched over, and could even participate in the watching.
One more step on the road to the mirror world
Thursday, May 15, 2008
Labels: crowdsourcing, Google Maps, metaverse, mirror worlds, Photosynth
2 comments:
Cool, huh?
The biggest limitation right now on expanding it further is the need for people to stitch together the photos, since computers are notoriously bad at image recognition.
Google and others are working on polled-response programs to build databases that will improve image software, so that computers can learn to disregard, say, a street shot blocked by a huge tractor trailer. The BBC ran a story on it recently.
Once computers are up to speed on processing all of this data, you're really going to see the 3D environment quality improve, and maybe even see your "virtual world."