News
Payment Experts Payrails Team With Ride-Hailing App inDrive
The partnership solves the complexities of integrating various regional payment service providers into the global mobility service.
Payment platform Payrails has announced a strategic collaboration with inDrive, the world’s second-most-downloaded ride-hailing app. The partnership is already helping the California-based inDrive overcome the complexities of integrating various payment service providers and alternative payment methods across different regions, including MENA.
Payrails’ solution is provider-agnostic, featuring dynamic payment routing and extended coverage of alternative (regional) payment methods. The technology is complex, but the main takeaway is simple: inDrive has already seen an 11% increase in card approval rates and significantly boosted conversions, enhancing profits for inDrive and improving income security for its workers.
Payrails also delivers a streamlined experience for inDrive’s developers by offering a single API (application programming interface), eliminating the need for multiple individual integrations with different regional payment providers and payment methods.
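Payrails’ actual API is not public in this article, but the idea of one integration point with dynamic, cost-aware routing can be sketched in a few lines. Everything below — `UnifiedPaymentsClient`, the `Provider` records, and the cheapest-eligible-provider rule — is illustrative only, not Payrails’ real SDK:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    """A hypothetical regional payment service provider."""
    name: str
    regions: set   # markets the provider covers
    fee_bps: int   # processing fee in basis points

class UnifiedPaymentsClient:
    """One charge() call instead of a bespoke integration per provider."""

    def __init__(self, providers):
        self.providers = providers

    def charge(self, amount, currency, region):
        # Dynamic routing: pick the cheapest provider that covers the market.
        eligible = [p for p in self.providers if region in p.regions]
        if not eligible:
            raise ValueError(f"no provider covers region {region!r}")
        best = min(eligible, key=lambda p: p.fee_bps)
        return {"provider": best.name, "amount": amount,
                "currency": currency, "fee_bps": best.fee_bps}

providers = [
    Provider("cards-global", {"EU", "MENA", "LATAM"}, fee_bps=290),
    Provider("wallet-mena", {"MENA"}, fee_bps=180),
]
client = UnifiedPaymentsClient(providers)
print(client.charge(2500, "AED", "MENA"))  # routes to the cheaper MENA wallet
```

The point of the sketch is the shape of the abstraction: the caller sees one interface, while provider selection happens behind it per market — which is the kind of routing the article credits for the approval-rate gains.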
Vasiliy Everstov, Head of Fintech at inDrive, expressed enthusiasm about the results, stating: “Partnering with Payrails empowers us to activate all of our key payment objectives for the business across our top priority markets. Together, we are driving tech innovation in the mobility industry and unlocking new opportunities for drivers and consumers alike”.
Orkhan Abdullayev, Co-Founder and CEO of Payrails, echoed this sentiment, remarking: “Collaborating with inDrive to revolutionize global payment processing underscores our commitment to providing innovative solutions in the MENA region and beyond that address the evolving needs of large enterprises. Our aim is to set new benchmarks of excellence for payment solutions across key industries worldwide”.
Through the collaboration, inDrive can now access and automate the optimal payment routes in each market at the lowest cost, driving efficiency and profitability in a highly competitive sector.
News
Nano Banana 2 Arrives In MENA For Google Gemini Users
Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.
Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.
The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.
Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.
The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.
Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.
By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.
The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
