News

Apple’s California Streaming Event Is Set To Take Place On September 14

The online event will be broadcast from Apple Park, Apple’s corporate headquarters.

After weeks of intense speculation and rumors, Apple has finally announced that its next special event will take place on Tuesday, September 14 at 1 PM ET. The event is called California Streaming, and it will be broadcast from Apple Park, Apple’s corporate headquarters.

The event invitation page shows a glowing Apple logo floating over a lake against a darkening sky. The logo hides a clever Easter egg that you can reveal by tapping it on an ARKit-compatible iOS device. When you do, the ARKit viewer pops up, rendering an augmented reality version of the logo over your surroundings. You can then zoom in on the logo and enter the image on the invitation page. Pretty cool stuff!

iPhone 13 Will Be The Star Of The Show

It’s no secret that Apple will introduce new iPhones at this year’s fall event. Apple’s iPhone 13 models (a 5.4-inch iPhone 13 mini, a 6.1-inch iPhone 13, a 6.1-inch iPhone 13 Pro, and a 6.7-inch iPhone 13 Pro Max) are expected to be very similar to the iPhone 12 models that were released last year.

The biggest change will likely be the 120Hz ProMotion display, though Apple will almost certainly reserve it for the Pro models. Beyond the high-refresh-rate screen, Apple customers can look forward to a smaller notch, the A15 chip, faster 5G, and improved cameras.

More Announcements To Look Forward To

Besides the refresh of the entire iPhone lineup, the California Streaming event is expected to introduce the first redesign of the Apple Watch in years. Thanks to a new lamination technique, the Apple Watch Series 7 will bring the display closer to the cover glass, making it look even more stunning than before.

The AirPods 3 have reportedly been in mass production since August, so the event provides the perfect opportunity for their introduction. Their design is rumored to be much closer to the AirPods Pro, and they may get active noise cancellation to make the redesign feel more justified.

Nano Banana 2 Arrives In MENA For Google Gemini Users

Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.

Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.

The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.

Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.

The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.

Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.

By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.

The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
