News
New Artificial Skin For Robots Allows Them To Feel Things
A groundbreaking new development from a Caltech researcher means that robots will soon be able to “feel” their surroundings, with sensations relayed back to human operators.
Wei Gao, an assistant professor of medical engineering at Caltech, has developed a new platform for robots and their operators known as M-Bot. When it hits the mainstream, the technology will allow humans to control robots more precisely and help keep operators out of harm's way in hostile environments.
The platform is based around an artificial skin that effectively gives robots a sense of touch. The newly developed tool also uses machine learning and forearm sensors to allow human users to control robots with their own movements while receiving delicate haptic feedback through their skin.
The synthetic skin is composed of a gelatinous hydrogel and makes robot fingertips function much like our own. Inside the gel, layers of micrometer-scale sensors, printed in a process similar to inkjet printing, detect touch and relay it to the operator as very gentle electrical stimulation. For example, if a robotic hand gripped an egg too firmly, the sensors in the artificial skin would let the human operator feel the shell beginning to crack.
Gao and his Caltech team hope the system will eventually find applications in everything from agriculture and environmental protection to security. The developers note that operators will be able to "feel" fine details at a distance, such as how much fertilizer or pesticide is being applied to crops, or whether a suspicious bag carries traces of explosives.
Abdulmotaleb El Saddik, Professor of Computer Vision at Mohamed bin Zayed University of Artificial Intelligence, has noted that the new development offers even more applications and possibilities: “The ability to physically feel the touch, including handshakes and shoulder patting, could contribute to creating a sense of connection and empathy, enhancing the quality of interactions, particularly for the elderly and people living at a distance or those who are in space [such as] astronauts connecting with their family and children”.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, and edits, supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization applied directly within images.
Under the hood, the system taps Gemini's broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
