Google Assistant does a lot of things. This invisible artificial intelligence residing (partly) inside our devices can answer all kinds of questions, control our homes, help us plan our day, play our favorite music, and, with the addition of features like What's on my screen and Google Lens, glean more from what we're looking at and provide contextual answers. What you may not be aware of, and something I recently discovered (though it isn't very new), is that Assistant can read your screen even when you don't explicitly ask it what's on your screen. That has the potential to be very handy, but also extremely creepy if you didn't know it was possible.

Reading your screen to answer some questions

Say you're talking to your friends on WhatsApp and they happen to mention the new Avengers trailer. You're interested, but don't know who's in the movie. Pull up Assistant and ask "who acts in this movie?" without specifying anything else, and it will know from your screen which movie you're talking about. Another example is getting a message about a song from a new artist and asking "what's the latest single?" again without any further details. Assistant knows.

In the examples below, you can see me ask "how many live there" after a message about Paris, and "what's the latest result" following a message about Real Madrid. In each case, Assistant knew exactly what I was talking about. This also works if someone suggests a restaurant, for example, and you ask "how far is it from here?" It's not foolproof, though; there were several instances where it didn't catch what I was asking, but for a start, it's quite impressive.

Implicit and explicit permission

Before we start waving the invasion of privacy flag, I want to point out two things.

The first is that in all cases, I'm not starting off with "what's on my screen," but I am still giving Assistant implicit permission to see what's there and answer accordingly. Words like "this," "it," "there," and "that" seem to tell Assistant that I'm talking about something, and just as it can infer what they point to in continuous conversations, it can also do so when there's no previous command. It knows it should look for context somewhere, and the things on my display are the ones that make the most sense.
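For the curious, the documented way Android hands screen context to an assistant is the Assist API: when Assistant is summoned, the system captures the foreground app's view text, and the app itself can optionally attach structured hints via onProvideAssistContent. Here's a minimal sketch of that app-side hook; the MovieDetailActivity class, the URL, and the JSON-LD payload are hypothetical, and this doesn't claim to show how Google's own context inference works.

```kotlin
import android.app.Activity
import android.app.assist.AssistContent
import android.net.Uri

// Hypothetical activity showing a movie page. When the user summons the
// assistant on top of this screen, the system calls onProvideAssistContent
// so the app can attach structured context alongside the on-screen text.
class MovieDetailActivity : Activity() {

    override fun onProvideAssistContent(outContent: AssistContent) {
        super.onProvideAssistContent(outContent)
        // A canonical link for what's currently on screen (assumed URL).
        outContent.webUri = Uri.parse("https://example.com/movies/avengers-endgame")
        // Optional schema.org-style JSON describing the content.
        outContent.structuredData = """
            {"@type": "Movie", "name": "Avengers: Endgame"}
        """.trimIndent()
    }
}
```

Even when an app does nothing here, the system can still collect the visible text for the assistant, which is presumably what the "use text from screen" permission discussed below is gating on Assistant's side.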

The second is that Assistant doesn't have the freedom to read screens willy-nilly. The first time you ever ask it "what's on my screen," it tells you there's a setting and a permission involved. If you approve its request to "use text from screen," you're giving it the go-ahead to act the way described above. You may have approved this a few months ago and forgotten about it, so double-check it now, and keep in mind that you can revoke the permission at any time should you not want it to behave this way.

Update: As pointed out by badzi0r, Pixel devices have a few more options on that screen, one of which is the ability to flash the display's edge when Assistant accesses info from the screen. It's a nice start, but it isn't available on all devices.

Benefits and issues

There are a few things to discuss here, and my thoughts on them are the same as for anything related to Assistant: it's convenient, it's hidden, and what am I giving up in return?

Being able to ask Assistant about something you got in a message or are reading right now is super convenient. You don't need to copy and paste words to be specific in your question, so if a friend tells you about a new movie, song, sports result, restaurant, or place, you can easily get exact information about it, and this is more targeted than a blank "what's on my screen."

However, as with many of Assistant's features, this one isn't shown off or explained anywhere. If I hadn't seen it in a demo during MWC, I wouldn't have thought to try it. None of my Android Police colleagues knew about it (update: Ryne reminded me he covered something similar a while back), and we've been writing about Assistant for years.

The real issue, though, beyond discoverability, is that the permission setting tells you Assistant can read text from your screen, but it doesn't specify that it can do so even when you don't ask what's on your screen, nor does it explain a) what triggers it to do that and b) what it does with the info it reads.

The first concern spins out many more questions. Is Assistant checking text from my screen every single time I open it while I'm in a third-party app? Or is it waiting for those implicit permission keywords I mentioned above and only then does it read what's on my screen? Can privacy-focused apps protect themselves from Assistant's eyes, and is that done through the same feature that disables screenshots and screengrabs in password and data-sensitive apps?
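On that last question, the screenshot-blocking mechanism apps use today is FLAG_SECURE, which marks a window's content as secure so it's excluded from screenshots and screen recording. Whether Google also treats it as off-limits for Assistant's screen reading is exactly the part that isn't spelled out. A minimal sketch of how an app sets it, with a hypothetical SensitiveActivity standing in for a banking or password screen:

```kotlin
import android.app.Activity
import android.os.Bundle
import android.view.WindowManager

// Hypothetical screen holding sensitive data. FLAG_SECURE tells the system
// to treat the window content as secure: screenshots, screen recording, and
// display on non-secure outputs are blocked.
class SensitiveActivity : Activity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        window.setFlags(
            WindowManager.LayoutParams.FLAG_SECURE,
            WindowManager.LayoutParams.FLAG_SECURE
        )
        // setContentView(R.layout.activity_sensitive) // layout omitted in this sketch
    }
}
```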

The second concern can be partially addressed. Checking my Assistant history over on myactivity.google.com, I see no mention of where Paris was inferred from, for example, and there isn't a saved screenshot or text snippet to go with any of my questions. So my guess right now is that the text is only read and used locally, with nothing being saved on Google's servers, but a confirmation of that would be welcome.

While we know this Assistant feature isn't new per se, it's not one we were familiar with, and it has sparked a few questions and concerns for us. If you weren't aware of it either, now's a good time to check whether you've chosen the right permission setting and change it if you don't like how it's being used.