News
Samsung Reveals AI Camera Overhaul Ahead Of S26 Launch
Natural-language photo edits and AI composites are coming to the Galaxy camera app ahead of next week’s Galaxy S26 launch.
Samsung is baking generative AI directly into its Galaxy camera app, folding advanced editing tools into the core shooting interface ahead of next week’s Galaxy S26 reveal at Galaxy Unpacked.
The update from the Korean tech giant centers on natural-language editing. Users will be able to type or speak commands to alter images inside the native camera environment — with no exporting and no third-party apps required.
Among the features Samsung previewed were changing the time of day in a photo from bright afternoon to night, restoring missing elements (such as filling in a bite taken out of food), and merging subjects from multiple images into one composite frame. The upgrades mean that tasks once requiring desktop software and considerable design skill will soon be available as basic in-camera tools.
Samsung says the system is built on what it calls its “brightest Galaxy camera system ever,” tying the AI layer to upgraded hardware in the S26 line. The company describes the result as a “fluid creative process,” combining capture, editing and sharing in a single workflow.
The timing of Samsung’s announcement is deliberate. Smartphone makers are racing to anchor generative AI in daily use, not as a novelty feature but as basic infrastructure. By embedding editing at the point of capture, Samsung is signaling that the camera app — not a standalone AI tool — is where this shift plays out.
For the Middle East, where mobile-first creators increasingly drive everything from retail to short-form video production, frictionless editing is a genuine speed advantage. As Gulf markets push ahead with digitalization agendas, devices that compress production time increasingly double as business tools.
Samsung will detail hardware specifications and the full software stack when the Galaxy S26 series breaks cover at Unpacked. The camera, clearly, is where the company wants the AI story to land.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
