When it comes to privacy, smart speakers tread on a fine line between serving commands or playing content and listening in on everything you're doing. However, engineers at Security Research Labs have been able to cross that line using a series of building blocks that comprise Amazon Alexa's Skills as well as Google Assistant's Actions, making us aware of the ways some malevolent developers can capture our data.

The biggest exploit a developer can take advantage of when programming their voice app is using the sequence "�. " (that's U+D801, period, space) to force the voice assistant to "say" an unintelligible character sequence that ultimately results in silence. That, in combination with regular tools, can allow an app to collect specific pieces of data or simply just eavesdrop on everything your speaker can hear — all while bypassing security reviews from Amazon and Google as the companies don't inspect updates for voice apps.

For one, a developer can create a properly functioning app, then update it so that the "app" itself is useless, but — after a period of silence — makes the assistant say that a software update is available and can only be accessed by the user saying their password aloud.

Or, the developer can trick the user into thinking that they've stopped the Alexa Skill, but program the "stop" request to issue silent responses that allow the app to continue recording what users are saying and even be able to log that information down by setting up trigger words such as "I," "you," "password," or any other word.

For Google Assistant Actions, developers can record what people say doesn't even require trigger words — as long as the speaker hears people say something — anything — in perpetuity so long as the room doesn't fall silent for 30 seconds.

All that data isn't necessarily being picked up by Amazon or Google, but is definitely being stored by the developer.

Users shouldn't have to be vigilant by turning on the privacy beep on their Google speaker, listening for password prompts or have to physically turn off their speakers' microphones every time they use a Skill or Action (though it's never been a bad idea to vigilant). Instead, Security Research Labs suggests that Amazon and Google police app updates for unreadable characters, silent responses, and suspicious intents that look to fish for datasets like passwords.

Google responded to the SRLabs report by sending Ars Technica the following statement:

All Actions on Google are required to follow our developer policies, and we prohibit and remove any Action that violates these policies. We have review processes to detect the type of behavior described in this report, and we removed the Actions that we found from these researchers. We are putting additional mechanisms in place to prevent these issues from occurring in the future.