News
NVIDIA’s RTX 50-Series Laptop GPUs Bring Blackwell To Mobile
The company promises improved performance and efficiency, with up to 24GB of GDDR7 RAM for the flagship model.
NVIDIA has unveiled its RTX 50-series (Blackwell) GPUs for laptops alongside the desktop lineup, promising impressive performance gains without compromising efficiency. The next-gen hardware features powerful upgrades, including up to 24GB of GDDR7 memory for the flagship model. With retail availability set for March and April, CES 2025 attendees are already getting an advance look at several laptops featuring the GPUs.
The flagship RTX 5090 laptop GPU boasts a staggering 10,496 CUDA cores across 82 Streaming Multiprocessors, nearly matching the desktop RTX 5080 in raw specs. Equipped with 24GB of VRAM using 3GB GDDR7 modules, it operates on a 256-bit memory interface.
The RTX 5080 laptop GPU steps down to 7,680 CUDA cores (60 SMs) and 16GB of memory, matching its predecessor in capacity. It still delivers solid AI performance at 1,334 TOPS, with a flexible TGP range of 80W to 150W.
Meanwhile, the RTX 5070 series is split into two models: the RTX 5070 Ti laptop GPU with 5,888 CUDA cores and 12GB of memory, and the base RTX 5070 laptop GPU, which drops to 4,608 cores and 8GB of VRAM.
Efficiency is a focal point for these GPUs. NVIDIA claims that the RTX 5070 laptop GPU, running at 50-100W, can match the performance of the desktop RTX 4090 while using half the power. Blackwell GPUs also bring updates to NVIDIA’s Max-Q technology, designed to optimize power efficiency for laptops. Key advancements include Advanced Power Gating, which shuts down inactive GPU sections, and Low Latency Sleep, allowing the GPU to quickly enter and exit sleep states to save power during light use.
During its CES presentation, NVIDIA revealed the pricing structure for mobile Blackwell GPUs: partner costs for the RTX 5090, RTX 5080, RTX 5070 Ti, and RTX 5070 are $2,899, $2,199, $1,599, and $1,299, respectively. This pricing reflects what manufacturers pay, so the final cost of laptops featuring these GPUs will be higher.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
