GDD Europe (Google Developer Days) might have flown under your radar, which is understandable. GDD isn't Google I/O, and it often doesn't get as much attention. But, at the recent event in Kraków, Poland, Google showed off some cool new features for the Google Assistant and Google Lens, and they're worth checking out. 

If you have the time, I encourage you to watch yesterday's keynote. But, the simple version is that Google's been making a lot of headway in both the Assistant and Lens when it comes to context and natural language processing.

A few of the features shown off in the video are working now, but many of them don't seem to be live just yet. Or, at least, they didn't work when we tried them on our devices.

If you absolutely can't spare the time, we have a reasonably detailed summary of the relevant bits below.

Google Assistant

At a basic level, Google Assistant's performance has improved. From the keynote, it seems much faster to answer questions, and it's better able to leverage search to find answers. Speech recognition has also improved a lot, especially in noisy environments, as demonstrated at the end of the keynote. But the coolest parts are, of course, all the demonstrations of new Google Assistant features.

One interesting demonstration involves an incredibly vague question about the name of a film. Thanks to the improvements in natural language processing and a search based on the elaborate description provided, the Assistant was able to pick out the title easily. I wasn't able to get the same question working on my phone, though.

Google Assistant already had stored preferences, but it looks like those have been significantly expanded. In addition to existing features like setting Home as a referenced location or assigning a sports team as your favorite, you can also store custom preferences related to things like the weather.

In one particularly illustrative example, the presenter sets a temperature preference for swimming in Lake Zurich, then asks Google Assistant if he'll be able to swim this weekend, and it answers yes or no by referencing that stored preference. From then on, every time a relevant question is asked, the Assistant can consult that preference to determine an answer. This doesn't look like it's live yet, as I tried setting up a similar set of preferences and wasn't able to.
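The logic behind that kind of answer is easy to picture. Here's a minimal sketch of the idea, purely for illustration: the preference store, the forecast values, and the 22°C threshold are all assumptions on my part, not anything Google has documented.

```python
# Illustrative only: a toy version of answering "can I swim this weekend?"
# from a stored preference. The preference store, forecast values, and the
# 22 degree threshold are all made up for the example.

preferences = {
    "swimming_min_temp_c": 22,  # "I only swim when the lake is above 22 degrees"
}

weekend_forecast_c = {"Saturday": 24, "Sunday": 19}  # pretend lake temperatures

def can_swim(day: str) -> bool:
    """Compare the forecast against the user's stored threshold."""
    return weekend_forecast_c[day] >= preferences["swimming_min_temp_c"]

for day in weekend_forecast_c:
    answer = "yes" if can_swim(day) else "no"
    print(f"Can you swim on {day}? {answer}")
```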

Outside of preferences, the Assistant can also pick up context from recent searches. Successive questions can build on each other, so if you ask a seemingly nonsensical question such as "show me pictures of Thomas," the Assistant is able to use previous questions as a frame of reference, which can change the results. In that particular example, which was shown off in the keynote, the "Thomas" images changed from pictures of Thomas the Tank Engine to Thomas Müller of FC Bayern Munich, thanks to a previous request to see the team roster.

Contextual actions were present to some degree in the Assistant before, but they appear to have been significantly expanded. Even one-word follow-ups such as "where?" work as you'd expect. It's a much more intuitive and conversational approach to search.
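One plausible (and heavily simplified) way to picture that behavior is a rolling conversation context that gets folded into ambiguous follow-up queries. The sketch below is my own assumption about the general idea, not how the Assistant actually resolves context.

```python
# Illustrative only: a naive way conversational context could steer an
# ambiguous follow-up query. The context list and the rewrite rule are
# assumptions, not how Google Assistant actually works.

conversation_context = []

def ask(query: str) -> str:
    """Append recent context to short, ambiguous queries before 'searching'."""
    ambiguous = len(query.split()) <= 5  # crude stand-in for an ambiguity check
    if ambiguous and conversation_context:
        effective_query = f"{query} ({' / '.join(conversation_context[-2:])})"
    else:
        effective_query = query
    conversation_context.append(query)
    return effective_query  # a real system would run a search with this

print(ask("show me the FC Bayern Munich roster"))
print(ask("show me pictures of Thomas"))  # now biased toward Thomas Müller
```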

Outside of context and questions, the most useful tool shown off might be the "be my translator" mode, which we hadn't seen before. Once enabled, the Assistant translates whatever you say into a target language and speaks it out loud, making it easy to ask for directions or get help in a foreign country. Unfortunately, like so many other things on this list, although it was shown off, it doesn't seem to be publicly available quite yet.
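Conceptually, translator mode is a simple loop: recognize speech, translate it, speak the result. The sketch below shows only that shape; listen(), translate(), and speak() are hypothetical placeholders, not real Assistant or Google Cloud APIs.

```python
# Illustrative only: the rough shape of a "be my translator" loop.
# All three helpers are hypothetical stand-ins, not real APIs.

def listen() -> str:
    """Stand-in for speech recognition; returns the user's sentence as text."""
    return "Where is the nearest train station?"

def translate(text: str, target_lang: str) -> str:
    """Stand-in for a translation service, using a canned example."""
    canned = {("Where is the nearest train station?", "pl"):
              "Gdzie jest najbliższy dworzec kolejowy?"}
    return canned.get((text, target_lang), text)

def speak(text: str) -> None:
    """Stand-in for text-to-speech output."""
    print(f"[speaking] {text}")

# "Be my Polish translator"
speak(translate(listen(), target_lang="pl"))
```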

There are a ton of cool examples for the Google Assistant in the keynote (which, again, is absolutely worth watching if you have the time), but all these changes come together incredibly well, and with a clear focus. Google wants to make it simpler to ask its Assistant questions and get relevant answers. And, by all appearances, it's done just that.

Google Lens

Google didn't have quite as much to show off when it came to Lens, but we also haven't heard much about it since I/O, so we'll take what we can get. And what they did demonstrate looks incredible.

I strongly urge everyone to just watch this part for themselves, because Lens is almost magical. But, basically, Google demonstrated that Lens is able to pull contextual information from an image to answer questions, too.

Asking it for the caloric content of a pictured apple is certainly useful, but the best demonstration might be the currency conversion, in which the presenter asked how many Swiss Francs were in a pile of Polish Zloty. In just seconds, Lens identified the currency photographed and the quantity present, parsed the question, ran the conversion, and read out the answer.
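Once the notes are recognized, the rest is arithmetic: sum the detected denominations and multiply by an exchange rate. Here's a trivial sketch of that step, with made-up counts and a made-up rate rather than anything from the demo.

```python
# Illustrative only: the arithmetic behind the currency demo, once the notes
# have been recognized. The detected counts and the exchange rate are made up.

detected_notes_pln = {100: 2, 50: 1, 20: 3}  # denomination -> count, as "seen" in the photo
pln_to_chf = 0.26                            # assumed exchange rate, not a live quote

total_pln = sum(denomination * count for denomination, count in detected_notes_pln.items())
total_chf = total_pln * pln_to_chf

print(f"{total_pln} PLN is roughly {total_chf:.2f} CHF")
```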

Although we heard a lot about Lens at I/O, things have been silent since then (outside of a few icon changes we found in a recent teardown). With these features now being demonstrated live, hopefully consumers will be able to play with Lens, and with the new Assistant capabilities, soon.

Yesterday was the last day of GDD Europe '17, so there probably won't be much more to add from that event. I would bet that this is all leading up to a big series of announcements this fall. Either way, we'll just have to wait and see.

Source: YouTube