News
WhatsApp Update Will Debut AI Voice Mode With 10 Variants
WhatsApp is developing a new feature that will allow users to choose between various Meta AI chat mode voices.
WhatsApp is working on a new feature that will allow users to interact with Meta AI using a variety of voices. Although the feature is not yet available for beta testing, WABetaInfo has revealed that the latest WhatsApp beta update for Android (version 2.24.17.16) contains references to a Meta AI voice-selection option.
The upcoming feature will enable users to converse with Meta AI in real time. Although the chat mode is still under development, current reports indicate that users will be able to choose from ten different voices for Meta AI. So far, the feature has surfaced only in the Android version of WhatsApp, but iOS users are expected to gain access to the Meta AI chat mode eventually.
Even if you manage to download and install the WhatsApp beta on your Android device, the voice mode will likely remain locked, as it is still in development. Screenshots shared by the feature tracker reveal a new voice icon in the Meta AI chat interface, represented by an audio waveform next to the text field.
Upon selecting this icon, a bottom sheet appears with “Meta AI” displayed at the top. In the center, a circular design composed of several bubbles is visible. At the bottom of the sheet, the message “Hi, how can I help?” is shown along with a larger audio waveform icon, indicating that the AI is ready to listen.
Additional screenshots suggest that the Meta AI voice mode will offer users up to ten unique voices to choose from. The differences between these voices are not yet clear, but they may vary in accent, level of enthusiasm, or tonal character. It is unlikely, however, that any of the voices will support multiple languages when the update first rolls out.
News
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
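For developers exploring the API route, a minimal sketch using the google-genai Python SDK might look like the following. The model identifier and prompt here are placeholders, since Google has not published the exact preview ID referenced in this rollout; check AI Studio for the current name.

```python
# Minimal sketch: requesting an image from the Gemini API with the google-genai SDK.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="nano-banana-2-preview",  # hypothetical ID, not confirmed by Google
    contents="A widescreen storefront mockup with Arabic and English signage",
    config=types.GenerateContentConfig(
        response_modalities=["IMAGE", "TEXT"],  # ask for image output
    ),
)

# Save any returned image bytes to disk.
for part in response.candidates[0].content.parts:
    if part.inline_data:
        with open("mockup.png", "wb") as f:
            f.write(part.inline_data.data)
```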
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
