by Sean Hollister February 7, 2017 12:44 PM PST @starfire2258
This, my friends, this glorious TV and movie trope, may be coming true. The ability to "zoom and enhance" an image, one far too low-res for the human eye to make out, is now way, way closer thanks to a team of AI researchers at Google.
At left, the crummy low-res image the computer had to work with. At right, the actual photo. And in the center... the computer's reconstruction, a face that bears a passing resemblance to the real photo!
So, how did it do that? Google's Brain research team trained a pair of neural networks to do it all by themselves, by feeding them images of celebrity faces (and for a later test, bedrooms).
One network was responsible for figuring out how the low-res pixels might map to a higher-res image, while the second added fine details, each network drawing on what it "knew" about celebrities (or bedrooms) from having analyzed lots of similar pictures.
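For the curious: the paper linked below describes the two networks each producing, for every output pixel, a score (a "logit") for each of the 256 possible intensity values; those scores are summed, turned into a probability distribution, and a pixel value is sampled from it. Here's a toy NumPy sketch of just that combination-and-sampling step. The network outputs are random stand-ins, not real model outputs, and the function and variable names are illustrative, not from Google's code.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Convert raw scores into probabilities that sum to 1."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def combine_and_sample(cond_logits, prior_logits, rng):
    """Per pixel: add the two networks' logits, softmax them,
    then sample a value from the resulting distribution over
    the 256 possible intensities."""
    probs = softmax(cond_logits + prior_logits)  # shape (H, W, 256)
    h, w, k = probs.shape
    flat = probs.reshape(-1, k)
    samples = np.array([rng.choice(k, p=p) for p in flat])
    return samples.reshape(h, w).astype(np.uint8)

# Random stand-ins for the two networks' outputs on a tiny 4x4 image:
rng = np.random.default_rng(0)
cond_logits = rng.normal(size=(4, 4, 256))   # "what the low-res pixels suggest"
prior_logits = rng.normal(size=(4, 4, 256))  # "what faces usually look like"
img = combine_and_sample(cond_logits, prior_logits, rng)
print(img.shape)  # (4, 4)
```

Sampling (rather than always picking the most likely value) is what lets the system "imagine" sharp, plausible detail instead of producing a blurry average of every face it has seen.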
No, it's not remotely close enough to identify an exact person -- and remember, the computer is imagining the details, not magically extracting them. Still, it could be another tool (like a police artist's sketch) to help detectives ID their suspect. Perhaps it could help agencies get more value out of satellite images, too.
And it looks like it could work with source material that's far lower-res than the Boston Marathon bombing footage from 2013 -- where Carnegie Mellon University's CyLab demonstrated a similar "super resolution" technique.
Unfortunately, a Google rep tells us this was a "one-off research exploration," and has no current plans to use it.
It's also probably worth noting that Google's computers knew that they were looking at faces (or bedrooms) to begin with.
You can read much, much more about precisely how the Google technique works (and how it fooled 11 percent of humans, which Google's researchers claim is a remarkably high number) in the PDF document below.
(arXiv.org (PDF) via Engadget)