News
Truecaller To Use Microsoft Azure AI Speech For Call Answering
The new service features a powerful speech generation tool to allow users to create AI versions of their voices.
Truecaller, a well-known app for identifying and blocking spam calls, is enhancing its services by letting users create AI versions of their voices. The new feature, available to those with access to Truecaller’s AI Assistant, stems from a partnership with Microsoft: its Azure AI Speech tool generates realistic AI voices that accurately mimic users’ speech patterns and tone.
“This groundbreaking capability not only adds a touch of familiarity and comfort for the users but also showcases the power of AI in transforming the way we interact with our digital assistants,” explained Truecaller product director and general manager Raphael Mimoun in a recent blog post.
The AI Assistant in Truecaller screens incoming calls, informing recipients of a caller’s purpose. Based on this information, users can decide whether to answer the call themselves or let the AI Assistant handle it.
When the feature was introduced in 2022, users could only choose from a collection of preset voices. The ability to record one’s own voice represents a significant step towards the complete personalization of the service.
Azure AI Speech, showcased during the last Build conference, only recently added a Personal Voice feature that lets people record and replicate their voices. Microsoft explained in a blog post, however, that Personal Voice is available on a limited basis and only for specific use cases, such as voice assistants.
To maintain ethical standards, Microsoft’s Azure AI Speech automatically adds watermarks to AI-generated voices. Additionally, a code of conduct requires companies to obtain full consent from individuals being recorded and prohibits impersonation.
News
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is on speed, but also control: users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
