News
Dorsey-Backed diVine Brings Back Vine’s Looping Videos
The reboot pulls 100,000-plus clips from a salvaged archive and adds strict checks to block AI-made posts.
diVine has gone live with a rebuilt trove of classic Vine loops and fresh funding from Jack Dorsey. The app restores more than 100,000 six-second videos from the Vine archive and reopens a format that disappeared when Twitter shut Vine down in 2016.
The recovery almost didn’t happen. Archive Team volunteers scraped the site ahead of its closure but stored the material in huge binary dumps that were effectively unusable. Evan Henshaw-Plath, an early Twitter engineer now working with and Other Stuff, Dorsey’s new nonprofit, spent months cracking those files and stitching user data back together. He says the result captures most of Vine’s best-known clips, though millions of niche posts were never archived.
Creators retain their copyrights. They can request takedowns or reclaim profiles by proving control of the accounts linked in their old bios. Once verified, they can upload missing videos or post new ones.
diVine isn’t pitching nostalgia alone. The app lets users shoot fresh six-second loops but runs each upload through checks from the Guardian Project to confirm a clip was recorded on a real phone. Suspected AI content is blocked. That stance stands out as generative video races across major social platforms.
The service runs on Nostr, the decentralized protocol Dorsey has pushed as an alternative to corporate-controlled feeds. “Nostr — the underlying open source protocol being used by diVine — is empowering developers to create a new generation of apps without the need for VC-backing, toxic business models or huge teams of engineers,” Dorsey said.
Meanwhile, Henshaw-Plath sees a simple demand: spaces where the feed is human. “Yes, people engage with [AI] … but we also want agency over our lives and over our social experiences,” he said.
For users in the Middle East and elsewhere watching automated content flood their timelines, diVine marks a return to a lean format that once defined early mobile video — now rebuilt on open tech and a bet that authenticity still matters.
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
