IN 2015, humanity put more photos online than ever before. Our faces, our streets, our friends – all online. Now, researchers are tapping into this vein of information, studying photos in bulk to give us fresh insights into our lives and our cities.
In Singapore, an app called AirTick is aiming to use these photos to get a handle on air pollution. This is a pressing problem in south and east Asia, where schools and factories shut down when pollutants reach dangerous levels. The World Health Organization estimates that one in eight deaths worldwide is caused by air pollution.
Built by a team at Nanyang Technological University, the AirTick app estimates air quality by analysing photos of city streets en masse.
The app examines photos from large social sharing sites, checking when and where each was taken and how the camera was oriented. It then matches the photos with official air quality data from the same times and places. A machine learning algorithm uses this paired data to work out how to estimate pollutant levels based solely on how the air appears in photographs.
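The pairing step described above amounts to supervised learning: each photo becomes a training example whose label is the official air-quality reading at the matching time and place. The sketch below illustrates the idea with a deliberately simple setup – a single hypothetical "haze score" per photo and an ordinary least-squares fit – rather than AirTick's actual features or model, which are not described in detail here. All data values are invented placeholders.

```python
# Minimal sketch of the AirTick-style idea: learn a mapping from a
# simple image-derived haze feature to an official air-quality index
# (AQI). The haze scores and AQI values below are hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Hypothetical training pairs: (haze score extracted from a photo,
# AQI recorded by the nearest official monitor at the same moment).
haze_scores = [0.10, 0.25, 0.40, 0.60, 0.80]
official_aqi = [30, 55, 80, 120, 160]

a, b = fit_line(haze_scores, official_aqi)

def estimate_aqi(haze):
    """Estimate the AQI for a new photo, given its haze score."""
    return a * haze + b
```

A production system would of course replace the single haze score with learned image features and a richer model, but the training loop – extract features, join with monitor readings, fit, predict – follows this same shape.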
The end goal is for ordinary people to get accurate real-time estimates of the air quality in their neighbourhood. Rather than using air pollution sensors, which are expensive and not widely owned, people can use the cameras on their smartphones, which they carry with them. The project will be presented this month at the AAAI Conference on Artificial Intelligence in Phoenix, Arizona.
“Scouring streets virtually makes it possible to do in a single month what would typically take three years”
Other kinds of photo hoards are also finding alternative uses. At Columbia University in New York City, urban scientists are using Google Street View to help them study the city without leaving their office chairs.
The team is scouring the pictures to find infrastructure snags that heighten the risk of accidents involving pedestrians and cars – work that traditionally meant trawling the streets with clipboards, noting dodgy intersections and badly designed crossings. Project leader Andrew Rundle says the virtual method is far faster and cheaper, capable of completing in a single month the amount of work that would typically take three years.
In their latest project, Rundle and his colleagues combed through 532 intersections in New York City, noting features like traffic lights and crossings. Combining the observations with a database of car-pedestrian collisions, they found that accidents were more likely at corners with billboards, bus stops and pedestrian signals, suggesting those parts of the city could use improvement.
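The analysis described above boils down to joining two datasets – per-intersection feature audits and collision records – and asking whether accidents are more common where a given feature is present. Here is a toy illustration of that comparison; the intersection records and collision counts are invented, and the simple risk ratio stands in for the study's actual statistical methods, which are not detailed here.

```python
# Hedged sketch of the Columbia-style analysis: compare collision
# counts at intersections with and without a given feature.
# All records below are invented for illustration.

intersections = [
    {"id": 1, "billboard": True,  "bus_stop": True,  "collisions": 4},
    {"id": 2, "billboard": False, "bus_stop": True,  "collisions": 2},
    {"id": 3, "billboard": True,  "bus_stop": False, "collisions": 3},
    {"id": 4, "billboard": False, "bus_stop": False, "collisions": 1},
    {"id": 5, "billboard": False, "bus_stop": False, "collisions": 0},
]

def mean_collisions(rows, feature, present):
    """Mean collision count over intersections where `feature` == present."""
    subset = [r["collisions"] for r in rows if r[feature] == present]
    return sum(subset) / len(subset)

def risk_ratio(rows, feature):
    """Mean collisions with the feature vs without; >1 suggests higher risk."""
    return mean_collisions(rows, feature, True) / mean_collisions(rows, feature, False)
```

With the toy data above, `risk_ratio(intersections, "billboard")` comes out greater than 1, mirroring the study's finding that intersections with billboards saw more accidents. A real analysis would also control for confounders such as traffic volume.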
This isn’t the only project involving Google Street View: Yahoo Labs in Barcelona used it to build a program that finds the most visually appealing route from A to B. Other projects need more specific photos. Computer scientists at Carnegie Mellon University in Pittsburgh built a program to assess road damage from crowdsourced dashcam images. And in London, our photo brain is being used to figure out which streets encourage walking.
We’re seeing more and more of the world through digital eyes. But we’re only beginning to understand how to use our new digital sense.
This article appeared in print under the headline “Worth a thousand words”