News
Meta Unveils Its Prototype Haptic Gloves For Virtual Reality
The gloves are capable of simulating complex sensations to provide their wearer with natural feedback when interacting with virtual objects.
Meta — formerly Facebook — is trying to create what it describes as an embodied version of the internet, and it's working hard on many of the individual pieces meant to let users interact with it. A team at Reality Labs (RL) Research recently unveiled a prototype pair of virtual reality haptic gloves that simulate complex sensations, providing the wearer with natural feedback when interacting with virtual objects.
The gloves use arrays of microfluidic actuators driven by the world’s first high-speed microfluidic processor to achieve millisecond response times while keeping power consumption minimal — something that’s extremely important for any wearable hardware device.

Once ready for release, the gloves could support many virtual reality use cases. "The value of hands to solving the interaction problem in AR and VR is immense," explained RL Research Director Sean Keller. "We use our hands to communicate with others, to learn about the world, and to take action within it. We can take advantage of a lifetime of motor learning if we can bring full hand presence into AR and VR."
Unfortunately, a lot of work remains before the technology can leave the research lab where it's being developed. According to Keller, the team has made groundbreaking progress across multiple scientific and engineering disciplines, but the light at the end of the tunnel is only starting to become visible.
Meta isn’t the only company working on haptic gloves for virtual reality. There’s also HaptX, whose founder and CEO Jake Rubin has accused Meta of copying its patented designs. In an official statement, the company claims that Meta’s gloves appear to be substantially identical to HaptX’s patented technology.
"We welcome interest and competition in the field of microfluidic haptics; however, competition must be fair for the industry to thrive," said Rubin. Meta has yet to respond to the accusation, so stay tuned for updates.
News
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
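For developers, access through the Gemini API follows the same pattern as earlier Gemini image models. The sketch below uses Google's google-genai Python SDK; the model identifier is a placeholder, since the article doesn't name the preview ID for Nano Banana 2, and would need to be swapped for whatever AI Studio lists.

```python
# Minimal sketch: generating an image via the Gemini API using the
# google-genai Python SDK (pip install google-genai).
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# NOTE: the model ID below is hypothetical -- replace it with the
# preview identifier AI Studio actually lists for Nano Banana 2.
response = client.models.generate_content(
    model="nano-banana-2-preview",
    contents="A storyboard frame of a souk at dusk, warm lighting",
)

# Image bytes come back as inline data on the response parts;
# text parts, if any, carry the model's accompanying commentary.
for part in response.candidates[0].content.parts:
    if part.inline_data:
        with open("output.png", "wb") as f:
            f.write(part.inline_data.data)
    elif part.text:
        print(part.text)
```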
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
