A while ago, we posted about information we'd received indicating that sometime soon, Google's search functionality (and other actions) would be expanding beyond the Search app, moving into other apps for device-wide search interaction and - eventually - app-specific functionality.

It appears that isn't the only Search trick Google is working on, though. According to the information available to us, Google is working on functionality known internally, for now, as KITT (get it?) or "Android Eyes Free." This functionality would allow users to interact with Search without touching or even looking at their device, getting just what they want without any distraction.

Before we get started, I'll answer the question on everyone's mind - yes, it looks like Google is poised to open up "always-on" listening for the "Ok, Google" hotword. For now, it seems that detection while the display is turned off would require the device to be charging, though detection in apps or on the home screen would work any time. We'll discuss this further in a moment.

Of course, anything can happen, and we don't have any indication of when this functionality may become available to the public. All of that said, there's much more to talk about, so let's get started.


Confidence Level

For this rumor, we're going to go with a 9/10 confidence level. The functionality seen in the information available to us seems like a natural progression of Search, and what we've seen is recent enough to believe this is still in active development. As is often the case, though, we can't assume when this functionality will be publicly introduced, or whether the interface will look the way it does now.

The Rumor

This rumor, though it seems simple, has a few important facets to consider. Essentially, Google wants to build the kind of functionality seen on the Moto X into its own Search app. This means a special focus on Search at times when you can't or shouldn't be looking at your device for extended periods, or when you can't type to interact with Search. The main objectives appear to be as follows: enable users to activate Search with minimal effort from anywhere, provide an eyes-free interface for times when users shouldn't be looking at their device, and return results that don't require users to look at their device. We'll break these down in order.

First, Google wants to allow Search to be activated from anywhere (using only your voice), including your home screen or apps, or even when the display is off, so long as the device is charging. It's unclear right now whether devices with hardware capable of always-on listening will be subject to this limitation; the charging requirement may be a stop-gap while battery use is optimized. In other words, capable hardware may ship with the limitation when the functionality is first released, but would likely gain always-on listening without the need for charging later, once battery usage is sorted out.
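To make that rule concrete, here's a minimal Kotlin sketch of how an app could gate hotword listening on screen and charging state. This is purely our illustration of the behavior described above - the function name and exact policy are assumptions, not details from the leak.

```kotlin
import android.content.Context
import android.os.BatteryManager
import android.os.PowerManager

// Illustrative policy check only - not Google's code. Assumed rule:
// listen whenever the screen is on, or when it's off but the device is charging.
fun shouldListenForHotword(context: Context): Boolean {
    val power = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    val battery = context.getSystemService(Context.BATTERY_SERVICE) as BatteryManager

    val screenOn = power.isInteractive // home screen or in apps (API 20+)
    val charging = battery.isCharging  // required while the display is off (API 23+)

    return screenOn || charging
}
```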

Next is the car - outside of Gearhead, Google wants to provide a minimal, sparse interface for carrying out searches and other actions while driving, biking, or doing other things that require your concentration, in order to make things super easy while also avoiding any distraction. Users can choose to use Bluetooth devices or headsets to activate Search, but can also wave their hand over the device to initiate an interaction. The functionality would also read notifications aloud.
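As for the hand wave, the obvious building block is the proximity sensor. Here's a rough sketch of how wave detection could work on Android - the class, the callback, and the "near means wave" heuristic are our own guesses at the mechanism, not information from the leak.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Hypothetical wave detector: treats a "near" proximity reading
// (a hand covering the sensor) as a trigger to start an interaction.
class WaveDetector(context: Context, private val onWave: () -> Unit) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY)

    fun start() {
        proximity?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent?) {
        val sensor = proximity ?: return
        // Readings below the sensor's maximum range report "near."
        if (event != null && event.values[0] < sensor.maximumRange) onWave()
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit
}
```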

So, what about the times when your query returns only web results, and not a spoken or immediately visual response? Google has thought of that. First, for results that already include voice feedback, Google will speak more detailed answers in the car. Instead of simply saying "here's the weather in [location]," it will read out the card. For results that only include web links, Google is exploring options for "keeping" the results for later, or suggesting the user exit eyes-free mode to view them when it's safe to do so. This is an ongoing exploration, and Google is apparently still figuring out how to reconcile the sparse interface with queries - navigation, for example - where the screen would have to return to the full interface.

For those instances where Google is acting on your behalf, the same sort of confidence-based delineation is drawn in the UX. Based on Google's confidence, it will either use implicit confirmation, meaning Google acts on its own (goes ahead and sends a message, for example) after a short time unless the user hits a button or otherwise stops it, or explicit confirmation (Google asks "do you want to send it?"), where the user must reply or otherwise act to complete the action. Google seems to be taking great care in deciding which actions should use each method, because every extra ounce of attention the interface demands is cognitive load taken away from driving, which can be unsafe.
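To illustrate the shape of that split, here's a small Kotlin sketch of a confidence-gated confirmation step. The threshold, the time window, and the console prompt are all placeholders of ours; the real values and plumbing are unknown.

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

// Placeholder values - the real cutoff and timing are unknown.
const val CONFIDENCE_THRESHOLD = 0.85
const val IMPLICIT_WINDOW_MS = 3_000L

// Returns true if the action (e.g., sending a message) should proceed.
// High confidence: implicit confirmation - proceed unless cancelled in time.
// Low confidence: explicit confirmation - require an affirmative answer.
fun confirmAction(confidence: Double, cancelSignal: CountDownLatch): Boolean =
    if (confidence >= CONFIDENCE_THRESHOLD) {
        // await() returns true only if the user cancelled within the window.
        !cancelSignal.await(IMPLICIT_WINDOW_MS, TimeUnit.MILLISECONDS)
    } else {
        print("Do you want to send it? (y/n): ")
        readLine()?.trim()?.lowercase() == "y"
    }
```

In a real interface, the cancel latch would be tripped by the on-screen stop button (or a spoken "cancel"), and the explicit branch would listen for a spoken yes or no rather than reading from a console.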

The Evidence

From what we've seen, the way Google is accomplishing eyes-free search in the car is really smart. For this functionality, known internally as KITT, the Search interface is presented (possibly using Android's daydream feature) as a black screen accented only by the now-signature blue and red circular elements that indicate when Google is listening versus talking, along with extremely simple iconography and sparse text to show when a notification is being read aloud. Before we take a look, I'll note (as always) that these images are our own recreation of the interfaces we'll discuss, based on our information.
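If daydream really is the host, the black eyes-free surface is easy to picture. This is a speculative sketch - our information only says "possibly," and the class name here is a placeholder of ours:

```kotlin
import android.graphics.Color
import android.service.dreams.DreamService
import android.widget.FrameLayout

// Speculative: the leak only says daydream is "possibly" involved.
class EyesFreeDream : DreamService() {
    override fun onAttachedToWindow() {
        super.onAttachedToWindow()
        isInteractive = true // keep touch alive for cancel buttons
        isFullscreen = true  // hide system chrome for the sparse look
        // A plain black surface; the real UI would draw the blue/red
        // circle and the sparse notification text on top of this.
        setContentView(FrameLayout(this).apply { setBackgroundColor(Color.BLACK) })
    }
}
```

(A daydream like this would also need to be declared in the app's manifest with the android.service.dreams.DreamService intent filter.)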

What's interesting about the "turn taking" behavior - when Google needs more information to complete your task - is that the interface is even more sparse than what we saw in our post about Gearhead: the screen shows only the blue circle and a very minimal progress indicator at the bottom, indicating how many more turns you may need to take before Google has all the information it needs to, for example, send a message.
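One way to picture this is as slot filling: each turn asks for one missing piece of information, and the progress indicator reflects how many pieces remain. The Kotlin sketch below uses console prompts and invented slot names purely to show the shape of the loop.

```kotlin
// Illustrative slot-filling loop - the slot names and prompts are invented.
fun main() {
    val slots = linkedMapOf<String, String?>(
        "recipient" to null,
        "message" to null,
    )
    while (slots.values.any { it == null }) {
        val remaining = slots.values.count { it == null }
        println("[$remaining turn(s) remaining]") // the minimal progress indicator
        val next = slots.entries.first { it.value == null }
        print("Google: What's the ${next.key}? ")
        slots[next.key] = readLine()?.takeIf { it.isNotBlank() }
    }
    println("""Sending "${slots["message"]}" to ${slots["recipient"]}.""")
}
```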

Our information also outlines which settings would be available to users, giving more detailed clues to the functionality and a nice look at its opt-in nature. Most features can be switched on or off independently of other hands-free features.

Speaking of opting in, Google evidently plans to provide a very friendly onboarding process, with a Google Now card introducing the functionality and a simple set of screens describing how it works. Here's what that will probably look like, according to our information. Of course, first-run flows are subject to great change as Google finalizes features and decides which details are important to surface to the user.

Final Thoughts

It's easiest to think of this as Google bringing functionality from the Moto X into the Google experience, adding its own UI style and a few extra tweaks along the way. We can't be sure when this will be implemented, or if it will look exactly like what we've reviewed in this post, but it's clear to us that Google is working hard on a smart and useful way to keep your eyes (and fingers) off your phone when it's convenient.