Meta Goes Even Harder Into Smart Glasses With 3 New Models
Briefly

"Of course talking to Meta AI remains a key way of interacting with the glasses, but Meta hopes that adding the visual elements will enhance the chatbot experience. For example, live speech captioning and language translation is still switched on by voice-but with Meta Ray-Ban Display, you can see the translations and captions appearing in real time on the glasses rather than on your phone's screen."
"For times when talking might be difficult, Meta also showed off a feature that tracks handwriting input as an alternative to voice commands. Aimed at quick messages, the user can "draw" letters with an outstretched finger on a flat service (or your leg), and the Neural Band will turn it into text. Though the feature was part of the demo we received, Meta says it won't be available to users at launch, but will arrive soon. Who knows, maybe this will be the thing that helps save handwriting."
Meta Ray-Ban Display pairs voice-based Meta AI with on-glasses visual overlays that present live speech captions and language translations directly in the user's field of view. When asked what the user is looking at, the glasses can draw on their front-facing cameras to display visually rich information, and they can overlay turn-by-turn AR directions on the real world while walking. A handwriting input feature tracks finger-drawn letters and converts them to text via the Neural Band, though it will not be available at launch. Third-party integrations such as Spotify and Instagram are limited at launch. An earlier Orion prototype required an external puck for heavier AR computing and offered a fuller AR feature set.
Read at WIRED