The hype surrounding Google's much-talked-about Project Glass may have hit its first peak at last year's Google I/O conference, when stuntmen jumped out of a plane wearing the device, but the demonstration left many people wondering what else Glass can do besides first-person photo and video recording.

Since then, we've seen a few admittedly awesome videos, including a DVF fashion show shot through Glass and, more recently, the brilliantly executed "How It Feels," which went a bit further toward showing real-world use. But at SXSW today, attendees were given what might be the most informative (and exciting) demo we've seen yet.

At a Glass developer panel, Developer Advocate Timothy Jordan showed off Glass' capabilities with a live feed of the device on a big screen for all to see. Besides the simple "ok glass" home screen, Jordan explained touch gestures (swipe down to dismiss or go back, swipe left/right to scroll through past timeline actions or to access Now cards), sharing media, and Glass' bone conduction audio, which delivered sound, in Jordan's words, "just for me." Jordan also briefly mentioned that Glass accepts head gestures, meaning the futuristic glasses recognize at least three kinds of user input: touch, head movement, and voice.

Update: Another three-minute video, this one showing email receipt and replies, voice dictation mode, and Skitch integration:

During the panel, several apps were also demoed on Glass, among them Gmail, Evernote, The New York Times, and Path. Gmail works as you'd expect, showing brief notifications of new messages (which can be limited to "important" emails) and allowing for voice-input replies. Evernote will let users share photos directly to Skitch, and The New York Times will deliver hourly updates on important news with photos and headlines, reading the story aloud if you so choose. Finally, Path will deliver friends' photos, allowing for simple emoticon or voice-input replies.


What's more, Google's Glass-centric Mirror API got some attention. Once it is released, said the panel, developers will be able to create their own "timeline cards," using HTML and various media to deliver content to users, so long as they stay true to the four main tenets of Glass development: design for Glass, don't get in the way, keep it timely, and avoid the unexpected.
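To make the "timeline cards" idea concrete, here is a rough sketch of what building one of these HTML cards might look like. Since the Mirror API hasn't been released yet, everything here is an assumption based on the panel's description: the endpoint URL, field names (`html`, `speakableText`, `menuItems`), and the `READ_ALOUD` action are all hypothetical, not a published spec.

```python
import json

# Hypothetical REST endpoint for inserting timeline cards (assumed, not official).
MIRROR_TIMELINE_ENDPOINT = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_card(headline, body):
    """Build a card payload whose content is plain HTML, per the panel's
    'HTML and various media' description. All field names are assumptions."""
    return {
        # The card's visible content, expressed as a snippet of HTML.
        "html": (
            "<article>"
            f"<section><h1>{headline}</h1><p>{body}</p></section>"
            "</article>"
        ),
        # A spoken version, in the spirit of the read-aloud demos at the panel.
        "speakableText": f"{headline}. {body}",
        # A hypothetical menu action letting the wearer hear the card aloud.
        "menuItems": [{"action": "READ_ALOUD"}],
    }

card = build_timeline_card("Hourly update", "Top story summary goes here.")
print(json.dumps(card, indent=2))
```

In a real integration, this payload would presumably be POSTed to the timeline endpoint with OAuth credentials; keeping the card to a single short headline and paragraph follows the "keep it timely" and "don't get in the way" tenets from the panel.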

Frankly, the information revealed at the SXSW panel is the most exciting I've seen regarding Glass to date, and as we race toward Glass' rumored pre-2014 consumer release, things promise to get even more interesting.

Thanks, Ron!

via The Verge