A launch event without a barrage of spec numbers may turn out to be 2025’s most important technology inflection point. In the early hours of December 9, Beijing time, while most tech media were still dissecting the bleak sales of Apple’s Vision Pro, Google quietly held a special livestream in Mountain View, California, called “The Android Show: XR Edition.” There was no flashy stage, no throat-straining “One More Thing.” With a restrained 30-minute demo, the search giant declared a key fact to the world: smart glasses are no longer props from science-fiction movies, but the core gateway to the next hundred-billion-scale market of the AI era.
Project Aura: Gemini’s “first pair of native spatial eyes”
If you ask who the protagonist of this launch was, the answer is not a piece of hardware from Google alone, but the Project Aura smart glasses jointly built with the Chinese AR hardware company XREAL. Described by Google as “the most complete hardware sample to date and the closest to Android XR’s ideal form,” the product marks the first time the Gemini large model has truly gained the ability to “see the world.”
Split design: the perfect balance of weight and performance
Project Aura’s most disruptive design lies in its split architecture. Unlike Meta Quest or Apple Vision Pro, which follow an “all-in-one” approach that integrates all compute into the headset, Aura moves the battery and core computing components off the glasses body and connects through a “computing puck” that can be slipped into a pocket or clipped to the waist. This design brings two immediate advantages.
First, the worn weight drops sharply: the glasses body handles only display and sensing, keeping its weight at the level of ordinary sunglasses. According to XREAL, its supply chain is rooted in China’s Yangtze River Delta and has already achieved world-leading efficiency in iteration and mass production. This lightweight approach turns “all-day wear” from a slogan into a real possibility. Second, performance is not compromised: the computing puck houses Qualcomm’s Snapdragon XR2 Plus Gen 2 chipset, the same flagship processor used in Samsung’s Galaxy XR headset, which means Aura, while in glasses form, has spatial computing power comparable to high-end VR headsets.
70-degree field of view: a “critical-point breakthrough” for consumer AR
Project Aura adopts an Optical See-Through (OST) approach to achieve a 70-degree field of view. That number hides an industry-level breakthrough.
Currently, mainstream AR glasses built on BirdBath optics typically offer around a 50-degree field of view. The virtual image they show is like looking at the world through a small window: acceptable for a single app window, but very cramped once you try to multitask. A 70-degree FOV is a qualitative leap in the size and immersion of the virtual screen, equivalent to projecting an ultra-large virtual display in front of the user, with room for multiple app windows side by side. Seventy degrees is also roughly the maximum practical field of view that OST can achieve today, and for the AR experience this uplift is transformative. A larger FOV means more than a better viewing experience; it also gives the Gemini assistant a wider basis for understanding spatial relationships. For example, it can simultaneously recognize the dining table, cookware, and ingredients in front of the user and overlay recipe guidance in the correct spatial positions.
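To put the jump in concrete terms: the apparent width of a virtual screen grows with the tangent of half the field of view, w = 2d · tan(FOV/2). Assuming, purely for illustration, a virtual screen placed 2 meters in front of the wearer and treating the quoted figures as horizontal FOV, a 50-degree view gives roughly 2 × 2 m × tan(25°) ≈ 1.9 m of apparent width, while 70 degrees gives roughly 2 × 2 m × tan(35°) ≈ 2.8 m. That is about 50 percent more width, and more than double the screen area if the vertical dimension scales in proportion. The 2-meter distance is this article’s assumption, not a figure from Google or XREAL.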
X1S chip: foundational innovation from a Chinese team
Another highlight of Project Aura is XREAL’s self-developed X1S spatial computing chip. It signals that Chinese manufacturers are no longer content with assembly and contract manufacturing, and are pushing into the core technologies of XR devices.
This chip handles key tasks such as spatial positioning, gesture recognition, and environmental understanding. In the demo, users needed no controllers: with bare-hand gestures they could drag virtual windows, zoom content, and even “pinch” virtual objects to inspect them in 3D. The smoothness of that interaction rests on the X1S chip’s hardware-level acceleration of SLAM (simultaneous localization and mapping) algorithms. The X1S’s heterogeneous architecture can significantly reduce the power consumption of spatial computing tasks while keeping latency tightly under control. This kind of low-level optimization lets Android XR run stably on resource-constrained wearables and provides a hardware guarantee for the real-time responsiveness of AI applications.
That said, Project Aura was not officially released at this event; Google has promised to ship it next year.
Android XR: more than glasses—it is a “spatial AI operating system”
Project Aura’s dazzling performance is, in essence, a concentrated display of Android XR system capabilities. Google repeatedly emphasized at the launch that Android XR is no longer “an Android version that supports headsets,” but a brand-new computing platform with Gemini at its core.
From “seeing” to “understanding”: Gemini’s awakening of spatial intelligence
However powerful traditional AI assistants have become, they have been confined to the 2D plane of the phone screen. Large language models let AI “listen and speak,” and multimodal models let AI “see and draw,” but only spatial computing allows AI to truly understand the physical world. Project Aura seamlessly integrates XREAL’s optical display, self-developed chips, and spatial algorithms with Google’s Android XR platform and Gemini’s large-model capabilities. This combination finally gives AI both “eyes” and a “brain,” closing the loop of “see → understand → interact.” Its appeal lies in practical, everyday use: in the kitchen, floating recipe steps can follow your gaze and gestures; on a trip, a virtual screen can unfold in front of your eyes for immersive media that does not disturb the people around you.
This “seeing → understanding → interaction” loop relies on the continuous, interactive, and intelligible spatial semantic model built by Android XR. Put simply, the system not only recognizes people and objects but also understands their spatial relationships, physical attributes, and behavioral logic. This is the key leap for AI from a “digital tool” to a “spatially intelligent partner.”
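Neither Google nor XREAL has published the internals of that model, but a toy sketch makes “spatial semantic model” more concrete. The Kotlin types below are invented for this article (SceneEntity, SpatialRelation, and SceneModel are not Android XR APIs): a scene is a set of recognized entities, each with a pose in room coordinates, plus relations such as “on top of” that an assistant could query before anchoring content to the right spot.

```kotlin
// Illustrative only: a toy "spatial semantic" scene graph.
// None of these types are real Android XR APIs; they exist to show the idea of
// recognizing entities, their poses, and the relations between them.

data class Pose(val x: Float, val y: Float, val z: Float)   // position in meters, room coordinates
data class SceneEntity(val id: String, val label: String, val pose: Pose)

enum class RelationType { ON_TOP_OF, NEXT_TO, INSIDE }
data class SpatialRelation(val subjectId: String, val relation: RelationType, val targetId: String)

class SceneModel(
    private val entities: Map<String, SceneEntity>,
    private val relations: List<SpatialRelation>
) {
    // Look up entities by semantic label, e.g. "dining_table" or "cutting_board".
    fun find(label: String): List<SceneEntity> =
        entities.values.filter { it.label == label }

    // Answer queries like "what is on top of the dining table?", so an assistant
    // could anchor a recipe card next to the ingredients it refers to.
    fun whatIsOnTopOf(targetLabel: String): List<SceneEntity> =
        relations.filter { it.relation == RelationType.ON_TOP_OF }
            .mapNotNull { rel ->
                val target = entities[rel.targetId] ?: return@mapNotNull null
                if (target.label == targetLabel) entities[rel.subjectId] else null
            }
}
```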
A unified development stack: lowering the threshold for XR app development
Google knows well that the prosperity of an ecosystem depends on developer participation. To that end, Android XR plays its strongest card: full compatibility with the existing Android developer toolchain. At the event, Google announced that Jetpack, Compose, ARCore, Play Services, and other tools familiar to developers have been “re-projected onto spatial computing and wearables.” This means an ordinary Android developer does not need to learn an entirely new 3D engine or interaction paradigm; by calling the new XR APIs, they can adapt existing apps to the glasses. Porting a note-taking app to Aura, for example, is very straightforward: most of the business logic can be reused directly, and the main work is adjusting the UI layout from a 2D screen to spatial projection, as sketched below. This low-barrier strategy gives Android XR a chance to build up an app ecosystem quickly and avoid repeating the fate of Windows Phone.
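As a rough illustration of what that adaptation could look like, the sketch below hosts an ordinary Compose screen inside a spatial panel. It is modeled on the Jetpack Compose for XR developer preview (Subspace, SpatialPanel, SubspaceModifier); those APIs are still in preview, so exact names, packages, and signatures may differ by release, and NotesScreen is simply a placeholder for whatever 2D UI the app already ships.

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.unit.dp
import androidx.xr.compose.spatial.Subspace
import androidx.xr.compose.subspace.SpatialPanel
import androidx.xr.compose.subspace.layout.SubspaceModifier
import androidx.xr.compose.subspace.layout.height
import androidx.xr.compose.subspace.layout.width

// The app's existing 2D screen, reused unchanged. "NotesScreen" is a placeholder
// name for whatever Compose UI the note-taking app already has on phones.
@Composable
fun NotesScreen() {
    Column {
        Text("Shopping list")
        Text("- milk")
        Text("- eggs")
    }
}

// Spatial entry point: the same screen, floated as a panel in the user's space.
// Package and API names follow the Jetpack Compose for XR developer preview and
// may change before a stable release.
@Composable
fun NotesSpatialApp() {
    Subspace {
        SpatialPanel(
            modifier = SubspaceModifier
                .width(1024.dp)
                .height(640.dp)
        ) {
            NotesScreen()
        }
    }
}
```

The point of the pattern is that the business logic and the 2D composables stay untouched; only the outer shell changes, from an activity layout on a phone screen to a panel floating in space.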
Google also showcased “desktop windowing” on Aura: Windows apps can appear in large windows inside the glasses, and users can even have Gemini guide them through complex software operations in real time. This “PC extended screen” positioning gives Aura a clear early use case: remote work and mobile productivity.
A three-pronged approach: covering use cases from professional to everyday
Intriguingly, Google also announced two other pairs of AI glasses: wireless models co-developed with the fashion brands Warby Parker and Gentle Monster, one screenless and one equipped with a display.
The screenless version is similar to Ray-Ban Meta, but deeply integrated with Google services. It has built-in speakers, microphones, and a camera, yet looks no different from ordinary glasses. Users can converse with Gemini via the glasses, take photos, listen to music, and obtain real-time translation.
The display-equipped version adds a monocular AR display on top of the screenless design to show simple information. Unlike Meta’s products, Google’s AI glasses act as phone accessories, with most computation done on the phone. A miniature display module integrated into the lens shows core information when needed, such as navigation arrows, message notifications, and translation captions. The waveguide-based display covers a field of view of about 30 degrees and targets “light notification” scenarios.
Google’s intention is clear: the screenless glasses cover everyday “all-day wear” scenarios, while the display-equipped glasses cover specific scenarios that require visual feedback. This combination avoids technological overreach while rapidly occupying user mindshare. No single form factor can satisfy everyone’s needs; hardware must blend seamlessly into daily life and match personal style before AI can truly do its work. This “de-parameterization” mindset hits squarely at the current pain points of XR devices: bulky, odd-looking, and hard to wear every day.

Market shadow war: the “glasses anxiety” of the giants and Google’s “late-mover advantage”
As Google re-enters the smart-glasses market, it faces fierce competition from Meta and Apple, adding fuel to an already white-hot field. Each company is taking a different approach to gain an edge in this emerging market.
Meta’s Ray-Ban glasses, with sales surpassing 2 million pairs, have built an early lead, but they are limited to photography, audio, and simple AI dialogue, lack any visual feedback, and have been criticized as “not smart enough.” Meta’s high-end Orion is technologically advanced but costs over ten thousand US dollars and depends on an external compute unit, limiting its short-term commercialization.
Apple’s Vision Pro, while it defined the interaction paradigm of spatial computing, is destined to remain a niche toy for geeks given its USD 3,499 price and roughly 600-gram weight. According to supply-chain data, Vision Pro sales in 2025 were under 500,000 units, far below the industry’s expectation of 1 million. Its closed ecosystem also raises the bar for expanding its app ecosystem.
Chinese products such as Baidu’s “Xiaodu Glasses” and Li Auto’s Livis have some presence in the domestic market, but without the backing of a global operating system their application scenarios remain limited.
Google’s trump card is clear: binding Gemini AI deeply to Android XR and making the glasses a natural extension of the Android ecosystem. This strategy has a threefold advantage: user base, developer ecosystem, and AI capability. On user base, the world’s 3 billion Android device users can use the glasses without learning a new system; in Google Maps navigation on a phone, for example, tapping the share button can project the route directly into the glasses’ view. On developer ecosystem, the more than ten million Android developers worldwide give Google a migration speed that Apple and Meta cannot match. On AI capability, Gemini’s multimodal understanding leads the industry: on the authoritative MMMU benchmark, which measures a model’s multimodal understanding and reasoning, Gemini 3 Pro scored 81%, surpassing GPT-5.1’s 76% and Claude Opus’s 72%. Advantages of this kind from a multimodal large model will be greatly amplified in AR scenarios that require real-time understanding of the environment.
XREAL founder Chi Xu put it this way: “The core of AI glasses is AI, and I am optimistic about Google’s ability to integrate ecosystems.” This trinity of “system + AI + hardware” gives Google, though a latecomer, a real chance to leapfrog the incumbents.

[Disclaimer]: The above content reflects analysis of publicly available information, expert insights, and BCC research. It does not constitute investment advice. BCC is not responsible for any losses resulting from reliance on the views expressed herein. Investors should exercise caution.
