To many of us, a simple command such as "Alexa, what am I holding?" might seem unnecessary, but it could be indispensable for blind or visually impaired users. To that end, Amazon has introduced a new object recognition feature for its Echo Show devices, something Google should take note of now that it also has a camera-equipped smart display in the Nest Hub Max.

The new feature, called Show and Tell, was developed in collaboration with the Vista Center for the Blind and Visually Impaired in Santa Cruz, California. So not only was the feature designed for those users, it was also comprehensively tested with them to make sure it's as helpful as possible. Check out one such case study in the video below.

Amazon sees grocery identification as the main use case for Show and Tell, which uses computer vision and machine learning to help with what could otherwise be a tricky task: telling apart items, like canned or boxed goods, that feel identical in the hand. It's already live for users in the US on both first- and second-generation Echo Show hardware. Hopefully, it'll roll out to more people around the world in the near future.
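
Amazon hasn't published any implementation details, but the general pattern behind a feature like this, classifying a camera frame and reading the top label aloud, is easy to illustrate. Below is a minimal, hypothetical Python sketch using an off-the-shelf ImageNet classifier from torchvision; the file name `held_item.jpg`, the `identify` helper, and the spoken phrasing are all assumptions for illustration, and Amazon's real pipeline almost certainly relies on a custom, grocery-specific model rather than a generic classifier.

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# Load a pretrained ImageNet classifier and its matching preprocessing.
# This is a stand-in for whatever grocery-tuned model Amazon actually uses.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()

def identify(image_path: str) -> str:
    """Return a spoken-style description of the most likely object in a frame."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, H, W)
    with torch.no_grad():
        probabilities = model(batch).softmax(dim=1)
    confidence, index = probabilities.max(dim=1)
    label = weights.meta["categories"][index.item()]
    return f"It looks like: {label} ({confidence.item():.0%} confidence)."

# 'held_item.jpg' is a hypothetical snapshot from the device camera.
print(identify("held_item.jpg"))
```

In a real assistant, the returned string would be handed to a text-to-speech engine rather than printed, and a production system would also handle low-confidence results by asking the user to reposition the item.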