At the TED2025 conference, Google showcased its latest innovation: Android XR smart glasses integrated with Gemini AI. These prototype glasses demonstrated real-time object recognition, multilingual translation, and contextual assistance, marking a significant leap in wearable technology.
During the live demonstration, Google’s Shahram Izadi and Nishtha Bhatia highlighted the glasses’ capabilities. Gemini AI generated a haiku on demand, identified a book title glimpsed moments earlier, and translated signs between English and Farsi seamlessly.
The glasses also facilitated a natural conversation in Hindi without manual language settings, showcasing Gemini’s multilingual proficiency. Additionally, the device recognized a vinyl record and offered to play a related song, illustrating its contextual awareness.
Heads-up display with 3D maps
For navigation, the glasses presented a heads-up display with a 3D map, placing turn-by-turn directions directly in the wearer’s line of sight. Features like this aim to fold AI assistance naturally into daily activities.
The Android XR platform, designed for extended reality devices, supports these smart glasses and upcoming headsets like Samsung’s Project Moohan. Google emphasizes an open, unified ecosystem, allowing developers to create immersive experiences using familiar Android tools.
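As a rough illustration of what “familiar Android tools” means in practice, the sketch below uses Kotlin and Jetpack Compose. The XR-specific pieces (the androidx.xr.compose packages and the Subspace, SpatialPanel, and SubspaceModifier names) are taken from the Android XR developer preview and should be read as assumptions; names and signatures may change before the SDK ships.

```kotlin
// Minimal sketch of a spatial UI written with ordinary Compose code.
// The androidx.xr.compose imports, Subspace, SpatialPanel, and SubspaceModifier
// follow the Android XR developer preview and are assumptions here; the
// shipping SDK may differ.
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp
import androidx.xr.compose.spatial.Subspace
import androidx.xr.compose.subspace.SpatialPanel
import androidx.xr.compose.subspace.layout.SubspaceModifier
import androidx.xr.compose.subspace.layout.height
import androidx.xr.compose.subspace.layout.width

@Composable
fun TranslationCard(translatedText: String) {
    // Subspace opens a 3D region; SpatialPanel hosts regular 2D Compose UI
    // inside it, so the panel content is written exactly as it would be for
    // a phone or tablet app.
    Subspace {
        SpatialPanel(
            modifier = SubspaceModifier
                .width(1024.dp)
                .height(512.dp)
        ) {
            Text(
                text = translatedText,
                modifier = Modifier.padding(24.dp)
            )
        }
    }
}
```

The specifics of the API surface matter less than the developer-experience claim: everything inside the panel is plain Compose, which is what lets existing Android code and skills carry over to XR devices.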
While still in the prototype stage, these smart glasses represent a significant step toward ambient computing. By integrating AI directly into eyewear, Google envisions a future where technology assists users intuitively and unobtrusively.