News
Open Innovation AI Partners With AMD To Advance AI & GPU Tech
The collaboration will integrate AMD Instinct GPUs with Open Innovation AI’s platform to optimize performance across industries.
Open Innovation AI, a prominent provider of AI orchestration solutions, has formed a strategic alliance with AMD, a global leader in high-performance and adaptive computing. The collaboration aims to enhance the development, deployment, and optimization of AI models through the integration of AMD’s Instinct GPUs.
By combining AMD Instinct data center GPUs with Open Innovation AI’s orchestration platform, the partnership plans to deliver scalable, optimized AI solutions to multiple industries — a significant step forward in AI and GPU orchestration that promises businesses around the world cutting-edge performance and efficiency.
“Open Innovation AI’s platform introduces new flexibility and performance to AI workloads. With the integration of AMD’s high-performance GPUs, we can deliver unparalleled efficiency and innovation to our customers,” stated Dr. Abed Benaichouche, CEO and Co-Founder of Open Innovation AI.
“AI is transforming the future of computing, and this partnership plays a critical role in our strategy to offer advanced AI solutions that will drive industry-wide innovation,” added Zaid Ghattas from AMD.
This partnership combines the expertise of both Open Innovation AI and AMD to streamline the entire AI development process — from GPU architecture to end-user applications — ensuring peak efficiency and performance at every phase. Both companies are committed to pushing the frontiers of AI hardware and software, delivering innovative solutions to their customers and partners.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control: users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
