News
Instagram AI Studio: Create Custom Chatbots With Your Personality
Meta’s new feature lets users create custom chatbots with unique personalities that can interact with fans and assist in role-play scenarios.
Meta has unveiled a new tool called AI Studio, enabling users to create virtual characters with personalized traits and interests, including versions of themselves that can interact with audiences through direct messages.
AI Studio is initially rolling out to Instagram Business account holders and will be available to all Meta users in the United States within the next few weeks. The platform will be accessible via ai.meta.com/ai-studio and the Instagram app, as well as through WhatsApp, Messenger, and browsers.
Mark Zuckerberg, Meta’s CEO, envisions users creating custom AI chatbots for entertainment purposes or as personal support tools. For instance, chatbots could be used for role-play scenarios like negotiating a pay rise or resolving a conflict with a friend. These kinds of interactions provide a safe space for users to practice and receive feedback on various social situations.
To ensure responsible use, AI Studio includes features that allow users to restrict who can interact with their chatbots and control the topics they discuss. The platform’s usage policy also explicitly prohibits the creation of chatbots deemed hateful, explicit, or illegal.
In a blog post, Meta highlighted several chatbots developed by celebrities using AI Studio. For example, chef Marc Murphy created a chatbot named “Eat Like You Live There!” to offer dining recommendations, while photographer Angel Barclay designed “What Lens Bro,” a bot providing photography tips.
Meta’s AI Studio handbook provides guidance on customizing chatbots, allowing users to input a detailed description, choose a name, and upload an image. They can also specify how bots should respond to particular prompts. The system, leveraging Meta’s powerful Llama model, improvises responses based on these instructions.
In addition to AI Studio, Meta introduced Segment Anything Model 2 (SAM 2), a tool that can identify and track objects in images and videos. Zuckerberg demonstrated the technology by using it to monitor cattle on his ranch in Kauai, and noted that SAM 2 has broader applications, such as studying coral reefs, natural habitats, and landscape changes.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
