Saudi Arabia Plans Digital Twins For 5 Cities, Including Mecca
The project involves the creation of a cloud-based platform that will become central to the Kingdom’s smart city project.
South Korean tech company Naver has signed a contract with the Saudi Arabian Ministry of Municipal, Rural Affairs and Housing (MOMRAH) to build and administer digital twins for five of the country’s biggest cities: Riyadh, Medina, Jeddah, Dammam, and Mecca.
The news comes after a visit from South Korean President Yoon Suk Yeol, who arrived in the Kingdom to discuss deepening economic ties, saying: “If South Korea, which has cutting-edge technologies and a successful experience of industrial development, joins hands with Saudi Arabia, with its abundant capital and growth potential, we can create synergy stronger than any other nation”.

Naver has already signed a memorandum with MOMRAH to support Saudi Arabia’s digital transformation. Discussions have also taken place with Majed Al Hogail, Saudi Arabia’s Minister of Housing, on digitizing other aspects of city planning, transportation, and public safety.
The digital twin program reflects ongoing efforts to boost decision-making and improve digitization using AI, robotics, and cloud-based solutions. The project will be pivotal in the development of smart city infrastructure and will be used for a wide variety of tasks, including urban planning and flood monitoring.
“Leveraging Naver’s globally competitive technologies, we aim to spearhead the second wave of export boom to the Middle East. With this project as a starting point, Naver will also act as a bridge for Korean IT startups entering the Middle Eastern market,” announced Chae Seon-ju, President of ESG and External Policy at Naver.
Naver emphasized that the digital twin platform could serve as the foundation for numerous technologies and services in a continually evolving project. South Korean and Saudi startups could also use the open platform cloud software for urban water management, real estate services, robotics, autonomous driving applications, and traffic planning.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
