News
Dubai Crown Prince Takes Test Ride In Self-Driving Taxi
The Chevrolet Bolt-based Cruise AVs are helping to cement Dubai’s position as a global leader in self-driving transport.
Crown Prince of Dubai and Chairman of The Executive Council, Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, recently took the first demo test ride of a Chevrolet Bolt-based Cruise autonomous vehicle (AV) in Jumeirah.
The Dubai Crown Prince was welcomed by the Commander-in-Chief of the Dubai Police and a team of engineers from the Roads and Transport Authority (RTA) and Cruise. Mattar Al Tayer, Director General and Chairman of the Board of Executive Directors of the RTA, said, “Autonomous vehicles will play a pivotal role in offering innovative solutions for transportation challenges, curbing urban congestion, and elevating road safety. They support the RTA’s efforts to leverage the integration between mass transport systems and easing the mobility of public transport riders, providing services to many underserved users such as senior residents and People of Determination.”

In April 2021, Dubai’s Roads and Transport Authority (RTA) and Cruise entered a partnership to introduce a self-driving ride-hail service. The testing of Cruise AVs marks a crucial step toward enhancing Dubai’s position as a global leader in self-driving transport. The emirate aims to convert 25% of all mobility journeys to self-driving modes by 2030.
In April this year, digital mapping for self-driving Cruise vehicles took place in Jumeirah 1 using the company’s HD mapping technology. Cruise initiated limited vehicle testing in October, deploying five autonomous taxis overseen by safety drivers. The RTA plans to introduce a public registration process soon, enabling selected residents to use the Cruise ride-hailing app.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
