News
Virgin Hyperloop Concept Video Provides A Peek At The Future Of Transportation
If the idea of traveling at speeds up to 670 mph while sitting inside a futuristic pod propelled by strong electromagnets through an airless tube sounds like a cool sci-fi concept to you, then you should watch the latest video published by Virgin Hyperloop.
The video shows how the Hyperloop concept, which was first proposed in 2013 by Elon Musk, might enable a faster, greener, and more cost-efficient mode of travel in a not-so-distant future — at least in the United Arab Emirates.
Unlike many existing train stations, the one from which the passengers in the video board their pods is clean, bright, and inviting. The pods themselves echo the same optimistic vision of the future, where traditional materials and high-end technology work in unison to create a more pleasant transportation experience.
Every passenger seat is equipped with wireless charging, and translucent LCD screens that double as dividers between individual rows of seats show the remaining travel time and current speed. Smaller info displays inform passengers about Wi-Fi and toilet availability, both of which are guaranteed to come in handy during longer trips.
When will the first passengers be able to enjoy this exciting new mode of transportation? Sometime in 2030, most likely. Virgin Hyperloop, which receives financial backing from Dubai’s state-owned DP World, must first receive its safety certification before it is allowed to operate in the United Arab Emirates.
So far, Virgin Hyperloop has successfully completed its first passenger test, during which the pod accelerated to around 100 mph. That’s a fairly impressive speed, but there’s no denying that the company has a long way to go before it hits 670 mph.
When it does, its Hyperloop system could be a game-changer for all people who commute long distances on a regular basis. Jay Walder, CEO of Virgin Hyperloop, said that the company’s Hyperloop system must be affordable for people to use. As such, prices should be much closer to driving than flying, thanks to the fact that multiple pods (each carrying up to 28 passengers) can travel inside the same tube mere milliseconds apart.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
