News
ChatGPT Offers API Access & Developers Are Taking Advantage
Businesses can now develop paid services using the popular AI language model, meaning chatbots will soon be appearing everywhere.
On March 1, OpenAI, the San Francisco artificial intelligence company, released API access to its wildly popular ChatGPT tool, along with Whisper, a speech-recognition service.
Since the release of ChatGPT, developers have been using the platform to build all manner of custom tools, including apps like QuickVid AI, which automatically generates ideas for YouTube videos. The app’s creator, Daniel Habib, explained that, until now, it had been impossible to monetize software featuring chatbot AI.
“All of these unofficial tools that were just toys, essentially, that would live in your own personal sandbox can now actually go out to tons of users,” Habib says.
OpenAI’s API release could mark the start of a new AI gold rush. What was previously a series of industrious hobbyists creating apps in a licensing gray area could soon become an entirely new industry.
“What this release means for companies is that adding AI capabilities to applications is much more accessible and affordable,” notes Hassan El Mghari, who manages TwitterBio, a ChatGPT service that generates Twitter profile text for users.
OpenAI has also updated its data-retention policy: it will now hold user data for only 30 days, and it promises not to use user-submitted text to train its AI models. The change gives companies better control over their data, rather than forcing them to trust a third party to manage where it goes.
In addition to the improved data-retention policy, API access to ChatGPT is now 10 times cheaper than OpenAI’s less powerful GPT-3 API, which launched in June 2020. The falling price of these large language models means there will likely be a plethora of AI chatbots to choose from in the near future.
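For developers, the new endpoint is OpenAI’s chat completions API. A minimal sketch of how a service might construct a request, assuming the `gpt-3.5-turbo` model and an `OPENAI_API_KEY` environment variable (the sending step is left commented out, since it needs a live key):

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"


def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> urllib.request.Request:
    """Construct (but do not send) a chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode("utf-8"), headers=headers
    )


# To actually send the request (requires a valid OPENAI_API_KEY):
# with urllib.request.urlopen(build_chat_request("Suggest a YouTube video idea")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

An app like QuickVid AI would wrap a call like this behind its own interface, billing users for the generations it requests on their behalf.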
News
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
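For developers trying the model through the Gemini API preview, requests go to the same `generateContent` REST endpoint Gemini already uses. A minimal sketch of building such a request — the model ID here is a placeholder (Google’s documentation lists the actual Nano Banana 2 identifier), and the `imageConfig` field names are an assumption based on earlier Gemini image models:

```python
import json
import os
import urllib.request

# Placeholder model ID — check Google's model list for the real
# Nano Banana 2 identifier before use.
MODEL = "nano-banana-2-preview"
API_URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)


def build_image_request(prompt: str, aspect_ratio: str = "16:9") -> urllib.request.Request:
    """Construct (but do not send) an image-generation request."""
    payload = {
        "contents": [{"parts": [{"text": prompt}]}],
        # Assumed config shape: ask for an image back at a given aspect ratio.
        "generationConfig": {
            "responseModalities": ["TEXT", "IMAGE"],
            "imageConfig": {"aspectRatio": aspect_ratio},
        },
    }
    headers = {
        "Content-Type": "application/json",
        "x-goog-api-key": os.environ.get("GEMINI_API_KEY", ""),
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode("utf-8"), headers=headers
    )
```

Switching the aspect ratio to `"9:16"` would target the vertical social formats mentioned above, with the same prompt and endpoint.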
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
