When Google acquired British artificial intelligence startup DeepMind back in 2014, nobody was really sure why. It was a secretive company with no consumer-facing products, but it clearly had technology of interest to Mountain View. DeepMind is now part of the wider Alphabet structure, and a little more is known about what goes on there, including the work it does for Google.

One of the latest projects to come out of DeepMind is WaveNet, a deep neural network for generating more natural-sounding artificial speech, and its use case at Google is obvious: Google Assistant should sound much more human now that WaveNet has been incorporated into it. You can listen to a comparison below.

Before, without WaveNet...

[audio wav="https://www.androidpolice.com/wp-content/uploads/2017/10/nexus2cee_Hol_before.wav"][/audio]

Now, with WaveNet...

[audio wav="https://www.androidpolice.com/wp-content/uploads/2017/10/nexus2cee_Hol_After.wav"][/audio]

The technology was first introduced a year ago, and it has since been put to work making the Google Assistant faster and more natural-sounding. So far it has only been used for the US English and Japanese Assistant voices, but it will likely be rolled out to more languages in the future. It's also now possible to choose between two different voices for the US Assistant, which adds some welcome variety. Hit the source link if you'd like to learn more about how WaveNet works and listen to some more examples. It's truly impressive stuff.
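For the curious: DeepMind's WaveNet paper describes the model's core building block as the dilated causal convolution, where each output sample depends only on past samples, and stacking layers with doubling dilation rates grows the receptive field exponentially. Here's a minimal, illustrative NumPy sketch of that idea (single channel, hand-picked weights, not the actual WaveNet implementation):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal convolution: output[t] sees only x[t], x[t-d], x[t-2d], ..."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so no future samples leak in
    return np.array([sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
                     for t in range(len(x))])

def receptive_field(kernel_size, dilations):
    """Number of past samples a stack of dilated layers can see."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Doubling dilations (1, 2, 4, ..., 512), as in the WaveNet paper:
# ten cheap layers already cover 1024 samples of context.
dilations = [2 ** i for i in range(10)]
print(receptive_field(2, dilations))  # -> 1024
```

The causality constraint is what lets the network generate audio one sample at a time, conditioning each new sample on everything it has produced so far.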

Thanks: Henny Roggy

Source: DeepMind