News
Truecaller To Use Microsoft Azure AI Speech For Call Answering
The new service features a powerful speech generation tool to allow users to create AI versions of their voices.
Truecaller, a well-known app for identifying and blocking spam calls, is enhancing its services by allowing users to create AI versions of their voices. The new feature, available to those with access to Truecaller’s AI Assistant, stems from a partnership with Microsoft and its Azure AI Speech tool, which generates realistic AI voices that accurately mimic users’ speech patterns and tone.
“This groundbreaking capability not only adds a touch of familiarity and comfort for the users but also showcases the power of AI in transforming the way we interact with our digital assistants,” explained Truecaller product director and general manager Raphael Mimoun in a recent blog post.
The AI Assistant in Truecaller screens incoming calls, informing recipients of a caller’s purpose. Based on this information, users can decide whether to answer the call themselves or let the AI Assistant handle it.
When the feature was introduced in 2022, users could only choose from a collection of preset voices. The ability to record one’s own voice represents a significant step towards the complete personalization of the service.
Azure AI Speech, showcased at Microsoft’s most recent Build conference, only recently added a Personal Voice feature that lets people record and replicate their voices. Microsoft explained in a blog post, however, that Personal Voice is available on a limited basis and only for specific use cases, such as voice assistants.
To maintain ethical standards, Microsoft’s Azure AI Speech automatically adds watermarks to AI-generated voices. Additionally, a code of conduct requires companies to obtain full consent from individuals being recorded and prohibits impersonation.
OpenAI’s ChatGPT Health Is A Private Space For Health Data
A new health mode lets the popular AI platform tap medical records and fitness apps while walling off sensitive information.
OpenAI has created ChatGPT Health, a separate space inside its chatbot platform for handling medical and wellness data. The opt-in feature starts with a small US cohort before widening out.
Health-related questions have long driven traffic to AI tools. OpenAI says over 230 million people ask ChatGPT about health or insurance each week. The new mode adds personal context to that behavior but stops short of diagnosis or treatment advice.
Users can connect records from participating US providers through b.well and link apps such as Apple Health, MyFitnessPal, Function and Weight Watchers. Some links are US-only, while Apple Health needs iOS. Once connected, ChatGPT can surface patterns in labs, summarize information ahead of a clinic visit or help map diet and exercise choices against past data.
The data sits apart from other chat information. Health keeps its own memories, which do not carry over into other conversations. Users can view or delete health memories at any time, and OpenAI says this material is not used to train its models.
Security is also tighter in this mode. Health adds isolation and purpose-built encryption on top of the platform’s baseline protections. App connections require explicit permission, and disconnecting cuts off the data feed immediately.
“ChatGPT Health is another step toward turning ChatGPT into a personal super-assistant that can support you with information and tools to achieve your goals across any part of your life,” wrote Fidji Simo, OpenAI’s applications chief.
Physicians had input during development, though OpenAI has not detailed how that shaped the end product. The launch follows HealthBench, a dataset released in May to test models on realistic medical cases.
While currently rooted in the US healthcare ecosystem, the approach may draw interest in the Gulf and wider MENA markets as governments push digital health records and patient portals under modernization programs. Adoption will depend on whether users trust an AI assistant with such personal material and whether it fits clinical routines.
For OpenAI, the move marks a cautious step into regulated terrain and signals a shift toward sector-specific uses of generative AI.
