One of the more exciting announcements from Google I/O 2017 was Google Lens, an upcoming feature for Assistant. The general idea is that Assistant would look at your phone's camera feed and try to pull contextual information from it. Imagine Google Goggles, but using the company's incredible machine learning prowess.

While we're still waiting for the feature to go live in Assistant, we noticed in a teardown that it was being added to Google Photos. So in addition to using a straight feed from your camera, you could use Lens with any picture you've already taken. XDA Developers discovered that the Google Lens intent was already live, and could be activated through the use of ADB or Tasker. So with a little bit of work, it's possible to try out Lens with photos stored locally. And I have to say, I'm pretty impressed.

How to test out Google Lens

Shortly after this post went live, Lens completely stopped working (no info appears below pictures). So following these instructions won't actually get you anywhere.

While you can activate it using Tasker with the Content Provider Helper, I think it's much easier to do it through ADB. If you don't already have ADB installed, you can follow our instructions from this post. Once you get your phone connected (and have Photos v3.5 installed), just transfer a picture to your phone's storage with this command:

adb push /path/to/picture/on/your/computer /sdcard/test.jpg
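
If ADB complains that no devices are found, double-check that USB debugging is enabled on your phone, then run this to confirm it shows up as 'device' (not 'unauthorized'):

adb devices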

If you don't want to type out the full path to a picture on your computer, there's a much easier way (at least on Windows and Mac, I haven't tested this on Linux). Type in "adb push " (without the quotes, but including the space at the end), then drag the picture into the command line window. This should paste the file's full path, so you don't have to type it. Then type out the rest of the command and hit Enter.
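
For example, on Windows the finished command might end up looking something like this (the path here is just a placeholder for wherever the picture actually lives on your computer):

adb push C:\Users\you\Pictures\vacation.jpg /sdcard/test.jpg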

Once you have a picture at /sdcard/test.jpg, type in this command to launch Google Lens with the local picture selected:

adb shell am start -n "com.google.android.apps.photos/.lens.oem.LensActivity" -a "com.android.camera.action.LENS" -t "image/*" -d "file:///sdcard/test.jpg"
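
If you want to run more than one picture through Lens, you don't have to overwrite test.jpg every time - just push each file under a different name and point the -d parameter at the new path. For example (test2.jpg is an arbitrary name):

adb push /path/to/another/picture /sdcard/test2.jpg
adb shell am start -n "com.google.android.apps.photos/.lens.oem.LensActivity" -a "com.android.camera.action.LENS" -t "image/*" -d "file:///sdcard/test2.jpg"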

First impressions

While this hack doesn't make Lens easy to use, it does give us an early glimpse of what it's capable of. At Google I/O, the company showed off Lens identifying a flower and picking up login information for a router from the unit's sticker. Needless to say, I was eager to see if Lens could deliver.

The first set of images I tried were books and movies. Being able to find information about products from the cover/label is pretty basic functionality these days - even Google Goggles could do well in this category. My test subjects were The New Essential Guide to Droids, 2001: A Space Odyssey on Blu-ray, Terminator 2 on Blu-ray, and Mysteries of History.

As you can see in the screenshots, it worked with all but 2001: A Space Odyssey. I'm not sure why, but perhaps the offset angle threw it off. Still, 3/4 is a good score. Next I tried scanning an invitation, which almost worked perfectly; instead of '10:30am,' it came out with '1:30pm.'

According to the splash screen, Google Lens is also capable of identifying landmarks. But I figured pulling down pictures from Google Images would be too easy (they would likely already be in the reverse image search database), so I used some of my own pictures from a trip to Disney World a few years ago. To make sure Lens wouldn't cheat, I removed the location data from each picture beforehand.
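
If you want to try the same thing, one easy way to wipe GPS tags is ExifTool (assuming you have it installed); by default it also keeps a backup of the untouched file with an _original suffix:

exiftool -gps:all= /path/to/picture/on/your/computer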

Google Lens nailed the first two - Spaceship Earth and Cinderella Castle. Next I tried a picture of Space Mountain, with the 'mountain' itself in the background and the sign visible in the bottom left corner. Instead of the ride, Google Lens only displayed information about the park it's in. The last picture was the exterior of Animal Kingdom's Dinosaur ride, with the sign somewhat visible behind the large statue, and Google Lens guessed correctly.

Next, I wanted to replicate one of Google's demos from I/O. As you may know, routers usually have the default login information written on a sticker attached to the unit. At the event, Google showed Lens scanning a router's sticker, and pasting the login info into Google Keep. I tried the same thing with my router, but Google Lens only picked up the URL of the manufacturer's legal page.

Note: The sticker is only blurred in the screenshot, not the actual image that Lens scanned.

My last test was on random objects around my room. The first picture is my LCD writing tablet (here it is on Amazon - it's pretty sweet), the next is a shelf with an iPod Nano and an Apple 'Puck' USB mouse, and the last is my Google Home. Lens couldn't figure out any of them - I thought at least the Home would be detected.

Conclusion

It's important to remember that Google Lens isn't even finished yet. While it seems close to completion, Google could definitely improve its skills between now and the official release. But even in this form, I came away mostly impressed. It can create calendar events from physical invitations, easily detect landmarks, and pull up Search results for various products/media.

However, I was a little disappointed that the features Google showed off - like advanced object detection and Keep integration - didn't seem to be working. But as with all tech powered by machine learning, it will only get faster and more accurate as time goes on. I look forward to seeing Lens' final form, whenever that may be.