News
Google Adds Arabic AI Music Creation For Ramadan
Lyria 3 brings 30-second Arabic tracks and AI greeting cards to Gemini as Google targets seasonal digital engagement.
Google has released Lyria 3, DeepMind’s latest generative music model, as a worldwide Arabic-language beta in Gemini, with mobile access rolling out over the coming days.
The update lets users create 30-second tracks by typing a simple prompt. A request such as “an upbeat, modern Arabic fusion track for Ramadan” produces a short composition within seconds, either with lyrics or as an instrumental.
The Ramadan timing is deliberate, as Google is positioning Gemini as a tool for personalized audio greetings and quick-share content in Arabic. Alongside music, users can generate customized Ramadan greeting cards using Nano Banana, Gemini’s image generation and editing model, via a dedicated microsite available in English and Arabic.
Lyria 3 works in two ways. Users can generate tracks from text prompts describing a mood or theme, or they can upload photos or videos and have Gemini compose lyrics and music to match the visuals. Each track comes with auto-generated cover art from Nano Banana and can be downloaded or shared by link.
Google is clear about the intent. “The goal of these tracks isn’t to create a musical masterpiece, but rather to give you a unique way to express yourself,” the company said.
All audio generated in the Gemini app is embedded with SynthID, Google’s watermarking technology for identifying AI-created content. Users can also upload a file and ask Gemini whether it was generated using Google AI, with the system checking for SynthID and applying its own detection methods.
The model is designed for original output, not imitation. If a prompt names a specific artist, Gemini treats it as general stylistic inspiration and applies filters to avoid reproducing existing material.
For Google, the Arabic rollout signals a continued push to localize generative AI for regional audiences. As MENA markets accelerate digital adoption under programs such as Vision 2030, culturally tuned AI features are becoming a practical entry point for mass use.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
