Oakley And Meta Reveal Performance-Focused AI Smart Glasses

The AI-powered wearables are designed for athletes, combining voice control, hands-free capture, and enhanced optics.


Meta has partnered with Oakley to launch a new line of wearable devices under the “Oakley Meta” brand, blending Oakley’s sports-focused design with Meta’s voice-enabled, AI-powered tech. The first product, called Oakley Meta HSTN, is aimed at athletes and active users looking for hands-free access to information, media, and recording tools.

Positioned as a continuation of Meta’s expansion into wearables — following the Ray-Ban Meta line — the new collaboration adds a performance angle, with features tailored to sport and outdoor environments. Oakley Meta HSTN includes an embedded camera for video capture, open-ear speakers for audio playback, and integration with Meta’s voice assistant for hands-free prompts and queries.

Battery life is reportedly extended over earlier Meta glasses, with up to eight hours of typical use and 19 hours on standby. A dedicated charging case provides up to 48 hours of total use. Video quality is also upgraded, with support for 3K resolution, offering more detail than the 1080p standard found in earlier Meta glasses.

The glasses are IPX4-rated for water resistance and will be available in multiple frame and lens combinations, some featuring Oakley’s PRIZM lens technology, which enhances contrast and clarity by selectively filtering light. Prescription-ready models are also available.


Voice features are powered by Meta’s on-device assistant. Users can initiate commands such as checking wind speeds, asking sport-related questions, or capturing footage via voice prompts. The assistant is designed to respond to real-time queries without needing to access a separate device.

While the launch is backed by a marketing campaign featuring athletes such as Kylian Mbappé and J.R. Smith, the product is also part of a wider strategic push by Meta and EssilorLuxottica to expand connected eyewear into more specialized use cases. Availability begins in July with a limited-edition model, followed by a full rollout later in the year.



Nano Banana 2 Arrives In MENA For Google Gemini Users

Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.


Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.

The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.

Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is on speed, but also on control: users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.

The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.


Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.

By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.

The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
