Meta & Microsoft Release AI Language Tool For Commercial Use
The open-source AI model, called Llama 2, will be available through the Azure AI catalog and Amazon Web Services, as well as in a standalone Windows version.
Meta and Microsoft have partnered to release Llama 2, a “next-generation large language AI model” for commercial and research applications. Llama 2’s open-source release places greater emphasis on responsibility and ships with a responsible use guide, plus an acceptable use policy intended to prevent criminal applications, misleading information, and spam.
Meta is releasing pre-trained and chat-tuned versions of Llama 2 for free. Meanwhile, Microsoft is making the model available through the Azure AI catalog for use with Azure’s cloud tooling, including content filtering. Llama 2 can also run directly on Windows PCs and will be offered through outside providers such as Amazon Web Services and Hugging Face.
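As a rough illustration of what that third-party access looks like in practice, the sketch below loads a chat-tuned Llama 2 checkpoint with the Hugging Face transformers library. The model ID, prompt, and hardware settings are illustrative assumptions rather than details from the announcement, and the gated meta-llama repository requires approved access before downloads work.

```python
# Minimal sketch: running a chat-tuned Llama 2 checkpoint from Hugging Face.
# Assumes the transformers and torch packages, a GPU with enough memory for
# the 7B variant, and approved access to the gated meta-llama repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # smallest chat-tuned variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on consumer GPUs
    device_map="auto",          # spread layers across available devices
)

# Llama 2 chat models expect the [INST] ... [/INST] prompt wrapper.
prompt = "[INST] Explain what an acceptable use policy is in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same weights are what the Azure AI catalog and other providers host; only the download and hosting path differs.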
Major rivals, such as OpenAI’s popular GPT-4, are often locked down to protect subscription or licensing revenue. Llama 2’s open-source code, by contrast, lets companies customize the technology for their own purposes, such as chatbots and image generators, while giving outsiders a way to check for biases, inaccuracies, and operating flaws.
For Microsoft, Llama 2 is an important project in the fight against AI rivals — notably Google. Microsoft already uses OpenAI systems in Azure and Bing, so the latest Meta collaboration should give business customers greater choice, especially if they’re interested in fine-tuning an AI model to suit more specialist needs.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
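For developers trying the API preview mentioned above, the sketch below shows roughly what a text-to-image request looks like with the google-genai Python SDK. The model identifier is a placeholder assumption, since the exact preview ID for Nano Banana 2 should be taken from AI Studio, where the resolution and aspect-ratio options also surface.

```python
# Minimal sketch of an image request through the Gemini API preview.
# Assumes the google-genai Python SDK and a GEMINI_API_KEY in the environment;
# the model ID below is a placeholder, not a confirmed identifier.
from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

MODEL_ID = "nano-banana-2-preview"  # placeholder; check AI Studio for the real ID

response = client.models.generate_content(
    model=MODEL_ID,
    contents="A wide-angle photo of a desert highway at dusk, cinematic lighting",
)

# Generated images come back as inline binary parts alongside any text.
for i, part in enumerate(response.candidates[0].content.parts):
    if part.inline_data:  # image payload
        with open(f"output_{i}.png", "wb") as f:
            f.write(part.inline_data.data)
```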
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
