News
OpenAI’s Sora Video AI Is Now Available Globally
The new video generation model was released yesterday and is now available to ChatGPT Plus and Pro subscribers.
After months of anticipation, OpenAI’s video generation model, Sora, has officially launched for public use. Announced yesterday (Monday, December 10), the tool is accessible to ChatGPT Plus and Pro subscribers in the US and “most other countries” where OpenAI’s chatbot operates, with access rolling out to eligible users over the course of the day.
The version being released, Sora Turbo, is a step up from the early preview showcased in February. OpenAI highlights its improved speed, though the model still has its quirks: the company warns that it can produce “unrealistic physics” and struggle to manage intricate actions over extended durations.
When users visit the Sora landing page, they’ll find a feed of videos generated by other users. Each video includes the original prompt, offering insight into how the footage was created. From there, users can remix the content, incorporate it into their own projects, or re-cut it entirely.
Currently, Sora’s video capabilities are capped at 1080p resolution and a maximum length of 20 seconds. While these limits may feel restrictive to some, they’re in place as OpenAI continues refining the model.
Every Sora video comes with a visible watermark and embedded C2PA metadata, making it easier to verify authenticity. OpenAI is taking a firm stance on safety, prohibiting the creation of criminal content and deepfakes.
Even if you don’t have a ChatGPT subscription, you can still browse Sora’s website to view content others have created. OpenAI CEO Sam Altman shared during a livestream that Sora’s release in Europe and the UK might take some time due to region-specific regulatory considerations.
Plans And Pricing
ChatGPT Plus subscribers can create up to 50 videos per month at 480p resolution, or fewer, shorter clips at 720p. The Pro plan offers significantly greater flexibility, including 10 times the usage limits and the ability to generate longer, higher-resolution videos. OpenAI has also teased that customized pricing for different user needs will roll out early next year.
News
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
