News
Volvo And Aurora Announce Their First Self-Driving Truck
The new autonomous goods vehicle was revealed at the ACT Expo in Las Vegas.
Vehicle maker Volvo and self-driving specialist Aurora have revealed their first production truck with full autonomous capabilities, three years after announcing their partnership. The companies showed off the product of their collaboration, known as the Volvo VNL Autonomous truck, at the ACT Expo in Las Vegas.
The truck, which will be manufactured by Volvo, uses Aurora’s self-driving platform, known as Aurora Driver. The system uses multiple high-resolution cameras, LiDAR sensors and imaging radars, and can detect objects up to 400 meters away.
Aurora’s platform has already logged billions of miles in training simulations and around 1.5 million miles on real public highways. In addition to its wide range of imaging and sensing technologies, the truck will feature redundant steering, braking, communication, computation, power management, energy storage and vehicle motion management systems, ensuring it can operate safely alongside other road users.
When the first 20 Aurora autonomous trucks make their debut in North America next month, they will still be overseen by human drivers until testing is complete. Aurora intends to deploy trucks between Dallas and Houston in the near future, but it’s unclear whether the fleet will consist of Volvo vehicles or trucks from another partner.
Volvo announced at the Las Vegas event that it has already begun manufacturing a test fleet of the VNL Autonomous trucks at its New River Valley factory in Virginia. Nils Jaeger, President of Volvo Autonomous Solutions, explained that the truck was the “first of [the company’s] standardized global autonomous technology platform,” and added that it would enable Volvo “to introduce additional models in the future”.
News
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
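For developers trying the preview, a minimal sketch of an image-generation call through the Gemini API might look like the following. It assumes the google-genai Python SDK, and the model identifier shown is a placeholder rather than a confirmed name for Nano Banana 2.

```python
# Minimal sketch: requesting an image from the Gemini API in preview.
# Assumes the google-genai Python SDK; the model id below is a placeholder,
# not a confirmed identifier for Nano Banana 2.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="nano-banana-2-preview",  # placeholder model id
    contents="A widescreen storyboard frame of a souk at dusk, with legible signage",
)

# Image output is returned as inline data parts on the first candidate.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("storyboard_frame.png", "wb") as f:
            f.write(part.inline_data.data)
```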
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
