News

Everdome Announces First-Ever Metaverse Soundtrack

The interplanetary metaverse project has revealed the opening single from the world’s first metaverse-specific soundtrack.

Created in tandem with composer Wojciech Urbański, the single “Machine Phoenix” is released today, forming the opening track of the first-ever metaverse-specific soundtrack.

The project, titled the “Everdome Original Metaverse Soundtrack”, will lend a distinctive atmosphere and background ambiance to Everdome’s hyper-realistic landscape — a future Mars settlement built using blockchain and VR technology.

Wojciech Urbański is a gifted composer with a portfolio of award-winning tracks that have been used by the likes of Netflix and Canal+. Wojciech’s Spotify community currently comprises over 600,000 monthly listeners, while overall, his tracks have been played more than 50 million times. The Everdome project hopes to use the composer’s talents to add drama to its hyper-realistic storytelling and visuals.

“Creating a musical illustration of the cosmos and space exploration is probably a dream held by every composer. This collaboration is for me a great artistic opportunity, as the setting of an immersive metaverse world permits the use of a very wide range of sounds and tools,” says Wojciech Urbański.

Metaverse users will hear Wojciech’s work as they launch from Everdome’s virtual Hatta spaceport to the Mars Cycler, situated in low Earth orbit, with the musical collaboration evoking an atmosphere similar to Harry Gregson-Williams’ soundtrack for “The Martian” or Hans Zimmer’s score for “Dune”.

Also Read: Bedu Has Built A Metaverse Of The UAE’s Planned Mars Trip

As well as partnering with Wojciech Urbański, Everdome is utilizing the talents of Michał Fojcik, a sound supervisor and designer who has worked on blockbusters including X-Men: Dark Phoenix and Solo: A Star Wars Story.

“Building a sonic background for a metaverse experience is a uniquely challenging task. In the absence of true forms of touch or smell, the visual and the sonic take on huge levels of importance if the experience is to be considered truly hyper-realistic,” says Michał Fojcik, sound supervisor and advisor.

Both artists will be dedicated to creating a soundscape that significantly enhances Everdome’s storytelling. The first single from their collaboration, “Machine Phoenix”, is available from today on all major music distribution platforms, including Spotify, Anghami, and YouTube.


News

Nano Banana 2 Arrives In MENA For Google Gemini Users

Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.


Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.

The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.

Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control: users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.

The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.

Also Read: RØDE Adds Direct iPhone Pairing To Wireless GO And Pro Mics

Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.

By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.

The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
