It has been almost a month since Google Play services 7.8 began rolling out to users, and as of yesterday, it is in wide release to everybody. A previous blog post from Google suggested that the big new feature for developers would be the Nearby Messages API, but it turns out there are a couple of other additions worth checking out. In a new post on the Android Developers blog, Google announced a new Mobile Vision API with the ability to detect the presence, orientation, and some details of faces when they are in frame on an active camera. There is also a new API for identifying and reading barcodes of many different types. Finally, the Google Cloud Messaging API has been enhanced with priority options to better handle messages of differing urgency and with localization support so users can be shown notifications appropriate to them.

For a quick introduction to each of the new features, watch the 5-minute DevBytes episode with Magnus Hyttsten.

Mobile Vision

The Mobile Vision API is a new bundle of functions designed around analyzing and working with photos and video, mostly through the camera. Like many other pieces of Play services, Mobile Vision exists as its own small-footprint SDK. It arrives with two initial components: the Face API for identifying people in view of the camera, and the Barcode API for reading a wide range of barcode formats.

Face API

Google has added the new Face API to capture details about the people that can be seen through the camera on a smartphone or tablet. Developers can get information about how many faces are present in the scene, where they are positioned, and their orientation. Additional details are available for each face, including the positions of the eyes, nose, cheeks, the corners and bottom of the mouth, and the ears and ear tips. There are additional methods for determining whether a subject's eyes are open and whether they appear to be smiling.
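For the curious, here is roughly what a still-image scan looks like in code. Treat this as a minimal sketch rather than official sample code: it assumes the FaceDetector, Frame, Face, and Landmark classes from the com.google.android.gms.vision packages in Play services 7.8, plus a bitmap and context supplied by the app.

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.util.Log;
    import android.util.SparseArray;

    import com.google.android.gms.vision.Frame;
    import com.google.android.gms.vision.face.Face;
    import com.google.android.gms.vision.face.FaceDetector;
    import com.google.android.gms.vision.face.Landmark;

    public class FaceScanner {

        // Detects faces in a still image and logs what the API reports about each one.
        public static void logFaces(Context context, Bitmap photo) {
            FaceDetector detector = new FaceDetector.Builder(context)
                    .setTrackingEnabled(false)                               // single still image, no tracking
                    .setLandmarkType(FaceDetector.ALL_LANDMARKS)             // eyes, nose, cheeks, mouth, ears
                    .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS) // smiling / eyes-open probabilities
                    .build();

            if (!detector.isOperational()) {
                return; // the native face library may still be downloading on first use
            }

            Frame frame = new Frame.Builder().setBitmap(photo).build();
            SparseArray<Face> faces = detector.detect(frame);

            for (int i = 0; i < faces.size(); i++) {
                Face face = faces.valueAt(i);
                Log.d("FaceScanner", "smiling: " + face.getIsSmilingProbability()
                        + ", left eye open: " + face.getIsLeftEyeOpenProbability());
                for (Landmark landmark : face.getLandmarks()) {
                    Log.d("FaceScanner", "landmark " + landmark.getType() + " at " + landmark.getPosition());
                }
            }

            detector.release();
        }
    }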

Many of these specific capabilities might sound familiar because they were mentioned in an APK Teardown for Google's Camera app a couple of months ago.

Camera apps often use algorithms to detect these same facial landmarks to automatically take pictures when everybody in a scene is smiling at the same time or to pick the best targets for a photo based on size and location. Some apps may use this information to apply special effects like synthesizing make-up or even turning a subject into a zombie. Similar libraries are available both as free open-source projects and as licensed options, but developers can now use Google's Face API as an alternative to integrating a 3rd-party solution.

Google is careful to note that this API is built for detection purposes, not for facial recognition. It does not uniquely identify subjects within view of the camera. In fact, it can only track a face as it moves within the frame of view, but if it falls out of view for even a moment, it will be considered a new subject. In other words, this shouldn't be a battleground subject for privacy concerns.

There are additional details and implementation instructions in this blog post.

Barcode API

The Mobile Vision API added another interesting capability for identifying content in the field of view, but this one is for barcodes. For many years, Google has advised developers to either call out to the ZXing Barcode Scanner app or implement the open source library it runs on. This meant showing an awkward change of context to users – possibly also asking them to install an unfamiliar app – or going to the trouble of adding and maintaining yet another library. In either case, the experience for users and developers left something to be desired.

Google is now baking that functionality directly into Play services so developers can use a single API to access a wide range of different 1D and 2D barcode standards.

  • 1D barcodes: EAN-13, EAN-8, UPC-A, UPC-E, Code-39, Code-93, Code-128, ITF, Codabar
  • 2D barcodes: QR Code, Data Matrix, PDF-417

Google's implementation also improves on many similar libraries by detecting and parsing multiple barcodes simultaneously, even if they are in different formats and orientations.
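A rough sketch of the same kind of still-image scan, this time with the barcode detector, follows below. Again, this is an illustration under the assumption that the BarcodeDetector and Barcode classes live alongside the face classes in the com.google.android.gms.vision packages; restricting the formats is optional and only shown here as an example.

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.util.Log;
    import android.util.SparseArray;

    import com.google.android.gms.vision.Frame;
    import com.google.android.gms.vision.barcode.Barcode;
    import com.google.android.gms.vision.barcode.BarcodeDetector;

    public class BarcodeScanner {

        // Scans a still image and logs every barcode the detector finds.
        public static void logBarcodes(Context context, Bitmap photo) {
            BarcodeDetector detector = new BarcodeDetector.Builder(context)
                    .setBarcodeFormats(Barcode.QR_CODE | Barcode.EAN_13) // optional: limit formats to speed up detection
                    .build();

            if (!detector.isOperational()) {
                return; // detector dependencies may still be downloading
            }

            Frame frame = new Frame.Builder().setBitmap(photo).build();
            SparseArray<Barcode> barcodes = detector.detect(frame); // several codes can come back from one frame

            for (int i = 0; i < barcodes.size(); i++) {
                Barcode barcode = barcodes.valueAt(i);
                Log.d("BarcodeScanner", "format " + barcode.format + ": " + barcode.rawValue);
            }

            detector.release();
        }
    }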

Google Cloud Messaging API

Priority

Some information should reach a user right away, like an instant message or smoke alerts from a Nest Protect. Then there are things like a notification that a package will be delivered to your house... in 36 hours (looking at you, UPS). Google is giving developers a little more control over how these notifications are handled. A new 'priority' parameter has been added that tells GCM if a message should be treated with some degree of urgency. Messages default to 'normal', which means they can be delayed for a short time and processed in batches to optimize battery life. When messages are marked as 'high' priority, they are sent immediately and will wake a sleeping device when they arrive. Naturally, an excess of high-priority notifications can be detrimental to a device's battery life, but for truly critical messages it's worth the cost to get them a few minutes earlier. More details are available here.
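On the app server, the change amounts to one extra field in the downstream message body. The snippet below is only a sketch of where that field sits, built with org.json; the registration token and the data payload are placeholders for illustration, not something from Google's documentation.

    import org.json.JSONException;
    import org.json.JSONObject;

    public class GcmPriorityExample {

        // Builds a downstream message body that asks GCM to deliver immediately.
        public static JSONObject buildUrgentMessage(String registrationToken) throws JSONException {
            return new JSONObject()
                    .put("to", registrationToken)   // placeholder device registration token
                    .put("priority", "high")        // "normal" is the default; "high" can wake a sleeping device
                    .put("data", new JSONObject()
                            .put("alert", "Smoke detected in the kitchen")); // made-up payload for illustration
        }
    }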

Localization

Google Cloud Messaging has picked up a new skill: creating notifications for different locales. Developers can now set values for body_loc_key, body_loc_args, title_loc_key, and title_loc_args to have GCM generate a notification appropriate to the device's locale when the message arrives.
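A similarly rough sketch of a payload using those keys is below. The loc_key and loc_args field names come from the GCM documentation, but the specific resource names and the argument are invented for illustration and are assumed to match string resources defined in the receiving app.

    import org.json.JSONArray;
    import org.json.JSONException;
    import org.json.JSONObject;

    public class GcmLocalizedNotification {

        // Builds a notification payload that GCM localizes on the device using the app's string resources.
        public static JSONObject buildLocalizedMessage(String registrationToken) throws JSONException {
            return new JSONObject()
                    .put("to", registrationToken)
                    .put("notification", new JSONObject()
                            .put("title_loc_key", "promo_title")                  // hypothetical string resource name
                            .put("body_loc_key", "promo_body")                    // hypothetical string resource name
                            .put("body_loc_args", new JSONArray().put("Friday"))); // substituted into the localized body string
        }
    }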

Nearby API

And then there's the Nearby API that we've heard so much about over these many, many months; it is finally live and ready to start yapping to any devices close enough to hear it. Google announced Nearby a month ago, but the SDK wasn't going to be distributed to developers until the rollout of Play services v7.8 was complete. Now that the time has come, developers can begin experimenting with this, and all of the other cool new APIs.

The SDK is now available through the SDK Manager, and documentation on the developer portal has been updated to reflect the new capabilities.
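As a starting point, publishing and subscribing boils down to a couple of calls on a connected GoogleApiClient. The sketch below assumes a client built with Nearby.MESSAGES_API and skips the user opt-in flow and error handling the real API requires, so treat it as an outline rather than a working sample.

    import android.util.Log;

    import com.google.android.gms.common.api.GoogleApiClient;
    import com.google.android.gms.nearby.Nearby;
    import com.google.android.gms.nearby.messages.Message;
    import com.google.android.gms.nearby.messages.MessageListener;

    public class NearbyChatter {

        private final Message hello = new Message("hello from this device".getBytes());

        private final MessageListener listener = new MessageListener() {
            @Override
            public void onFound(Message message) {
                Log.d("NearbyChatter", "found: " + new String(message.getContent()));
            }
        };

        // Broadcast a small payload and listen for payloads from devices nearby.
        public void start(GoogleApiClient client) {
            Nearby.Messages.publish(client, hello);
            Nearby.Messages.subscribe(client, listener);
        }

        public void stop(GoogleApiClient client) {
            Nearby.Messages.unpublish(client, hello);
            Nearby.Messages.unsubscribe(client, listener);
        }
    }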

Source: Play Services blog post, Face API blog post