Today Google announced the release of MobileNets, a family of TensorFlow vision models built for comparatively low-power platforms like mobile devices. In a cross-post on both the Open Source and Research blogs, Google shared details about the new visual recognition software. Now even more useful machine learning tools can run natively on your phone's hardware, quickly and accurately. And future tools like Google Lens will be able to perform more functions locally, with less need for mobile data and less waiting.
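MobileNets get their efficiency from depthwise separable convolutions, which split a standard convolution into a cheap per-channel depthwise step plus a 1x1 pointwise step. As a rough, hedged illustration (the layer sizes below are made-up examples, not figures from Google's announcement), here is how the multiply-accumulate cost of one layer compares:

```python
def standard_conv_macs(k, c_in, c_out, h, w):
    # Standard conv: every output channel mixes all input channels
    # through a k x k spatial filter at each output location.
    return k * k * c_in * c_out * h * w

def separable_conv_macs(k, c_in, c_out, h, w):
    # Depthwise step: one k x k filter per input channel (no channel mixing).
    depthwise = k * k * c_in * h * w
    # Pointwise step: a 1x1 conv that mixes channels.
    pointwise = c_in * c_out * h * w
    return depthwise + pointwise

# Hypothetical layer: 3x3 kernel, 64 -> 128 channels, 56x56 feature map.
std = standard_conv_macs(3, 64, 128, 56, 56)
sep = separable_conv_macs(3, 64, 128, 56, 56)
print(f"standard: {std:,} MACs, separable: {sep:,} MACs, {std / sep:.1f}x cheaper")
```

For this example layer the separable version needs roughly an eighth of the multiply-accumulates, which is the kind of saving that makes running these models on a phone practical.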
Google Goggles has been basically dead since 2014. It went three years without an update, and what little utility it offered was quickly replaced by other services or folded into existing apps. Well, now Google has unveiled a worthy successor to the idea in the form of Google Lens. Just announced at I/O, the new system will provide contextual information about things visually, like flowers you take pictures of or text you point your phone at. This is huge.
Unless you've been living under a rock, you are probably aware of the recent improvements and updates to the Google+ experience, both on the web and in mobile apps. While Auto Awesome, Auto Enhance, Auto Highlight, Auto Backup, and other widely discussed features are certainly exciting, one subtle nicety managed to fly under our radar until a post by Google's +Tor Norbye pointed out just how awesome it is.
The feature I'm talking about is visual recognition in Google+ photo search.
Remember when +Vic Gundotra mentioned during the I/O 2013 keynote that Google+ would now attempt to guess what you're talking about and auto-tag posts based on, among other things, attached pictures?