For nearly a quarter of Americans, family photos are among their most valuable possessions, outranking even birth certificates and Social Security cards. With Google Photos' newly launched editing features and advanced geo-tagging system, it shouldn't be too hard to preserve them indefinitely.
Google Photos has always been a handy tool. It offers unlimited storage for compressed photos, which is a huge boon for people who don't mind slightly lower quality.
Its online editing tools left much to be desired, but Google recently announced new editing tools designed specifically for the web version of Google Photos.
But that’s not all Google has had up their sleeves when it comes to photo technology development.
Google has also launched a new type of artificial intelligence, modeled on the brain’s visual cortex, to figure out where photos were taken based on the photo itself.
The project, called PlaNet, was developed by Tobias Weyand and James Philbin, software engineers at Google, with developer Ilya Kostrikov.
The team trained the program on a data set of 91 million images, along with the geolocation data for each image. They then divided the globe into about 26,000 cells of differing sizes, based on how many photos from the data set were taken in each area.
Theoretically, the more photographed a given area is, the more accurate the system should be at giving it a pinpointed location.
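The density-based partitioning described above can be sketched as a simple recursive subdivision: cells with many photos get split into smaller cells, while nearly empty ones are dropped. This is a minimal illustration of the idea only; the thresholds are hypothetical, and PlaNet itself used a different cell scheme rather than a plain latitude/longitude quadtree.

```python
# Sketch of density-adaptive partitioning: split a lat/lon box while it
# holds too many photos, drop boxes with too few. Thresholds are
# illustrative, not PlaNet's actual parameters.

def partition(photos, lat_min=-90.0, lat_max=90.0,
              lon_min=-180.0, lon_max=180.0,
              max_photos=4, min_photos=1, depth=0, max_depth=8):
    """Return a list of (lat_min, lat_max, lon_min, lon_max) leaf cells.

    photos: iterable of (lat, lon) tuples.
    """
    inside = [(la, lo) for la, lo in photos
              if lat_min <= la < lat_max and lon_min <= lo < lon_max]
    if len(inside) < min_photos:
        return []                      # too few photos: discard this cell
    if len(inside) <= max_photos or depth >= max_depth:
        return [(lat_min, lat_max, lon_min, lon_max)]  # keep as one cell
    lat_mid = (lat_min + lat_max) / 2
    lon_mid = (lon_min + lon_max) / 2
    cells = []
    for la0, la1 in ((lat_min, lat_mid), (lat_mid, lat_max)):
        for lo0, lo1 in ((lon_min, lon_mid), (lon_mid, lon_max)):
            cells += partition(inside, la0, la1, lo0, lo1,
                               max_photos, min_photos, depth + 1, max_depth)
    return cells
```

Running this on a cluster of photos around one city plus a few scattered points yields tiny cells over the cluster and large cells elsewhere, which is exactly the property the article describes: heavily photographed areas end up with finer-grained location labels.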
The photos the team used to compile the system and train the AI are from all over the web, with the only filters being to remove porn and non-photos. Not all are scenic photos, either — some were taken indoors and focus on pets or products.
The system was tested against 10 well-traveled humans using the site GeoGuessr, which asks players to guess where a particular Google Street View image was taken.
PlaNet won 28 of the 50 rounds, with a median localization error of 1,131.7 km, compared with the humans' median error of 2,320.75 km.
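Localization error here means the distance between the guessed and true locations on the Earth's surface. A common way to compute that is the haversine great-circle formula; the sketch below is a standard textbook implementation, not code from the PlaNet paper.

```python
import math
import statistics

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points,
    using the haversine formula and a mean Earth radius of 6371 km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def median_error_km(guesses, truths):
    """Median localization error over paired (lat, lon) guesses and truths."""
    return statistics.median(
        haversine_km(g[0], g[1], t[0], t[1])
        for g, t in zip(guesses, truths))
```

Using the median rather than the mean keeps a handful of wildly wrong guesses (a photo placed on the wrong continent) from dominating the score, which matters for a task where errors span several orders of magnitude.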