News
Samsung Could Reveal New Galaxy S23 On February 1st
The tech giant’s Colombian website has inadvertently revealed the date of an upcoming Galaxy Unpacked event.
Samsung may have unintentionally confirmed that it will unveil its next-generation flagship phones at the beginning of February, according to a report from 9to5Google. The company’s Colombian website recently published a page mentioning a Galaxy Unpacked event scheduled for February 1st, 2023, using the tagline “Epic Moments are Approaching” alongside a clear image of the Galaxy S23 Ultra’s camera setup.
Although several sources took screenshots of the page, the story is no longer viewable on the Colombian website. The announcement didn’t explicitly mention the upcoming Galaxy S23, but it did show the flagship phone’s rumored triple-camera configuration. The teaser page’s color palette is also thought to hint at the Galaxy S23 and Galaxy S23 Ultra’s new green and lilac colorways.
Previous Unpacked events have also taken place in early February, so this leak confirms what many already expected. Further rumors suggest that the launch will take place in San Francisco.
The upcoming smartphones will reportedly ditch Exynos chips in favor of Qualcomm’s Snapdragon 8 Gen 2 SoC across all worldwide markets. The Korean tech giant typically equips US, Asian, and European models with different processors, so the new lineup would represent something of a departure.
Finally, one of the most hotly anticipated features of the new handsets is their upgraded cameras: the flagship Galaxy S23 Ultra will likely feature a huge 200-megapixel main camera, while the base models will stick with a 50-megapixel primary shooter.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
