OpenAI Establishes Five-Level System To Gauge AI Progress

The ChatGPT creator revealed the new classification system to employees during a recent company-wide meeting.


OpenAI has introduced a five-tier framework to monitor its advancement toward developing artificial intelligence that can rival and even surpass human capabilities.

The initiative is the latest in the startup's efforts to enhance public understanding of AI safety and was shared with staff during a company-wide meeting on Tuesday, July 9. The levels span from conversational AI (Level 1) to AI that can independently operate an entire organization (Level 5), and OpenAI intends to present them to investors and other stakeholders.

During the meeting, OpenAI executives informed employees that the company is currently at the first level but is nearing the second level, known as Reasoners. This tier represents AI systems capable of basic problem-solving tasks comparable to a human with a doctorate-level education.

In the same session, OpenAI’s leadership demonstrated a research project featuring the GPT-4 AI model, showcasing new skills indicative of human-like reasoning. For years, the company has been working towards achieving what is often referred to as artificial general intelligence (AGI), which entails creating computers that can outperform humans in most tasks. Such systems do not yet exist, though OpenAI CEO Sam Altman has previously suggested that AGI might be achievable later this decade.


Determining the criteria for AGI has been a topic of ongoing debate among AI researchers. In a paper published in November 2023, researchers at Google DeepMind proposed a framework of five ascending AI levels, including “expert” and “superhuman”, which resembles the classification system used in the automotive industry for self-driving cars.

According to OpenAI’s proposed levels, the third tier on the road to AGI is called Agents, representing AI systems that can perform tasks autonomously over several days. Level 4 describes AI that can generate new innovations, while the highest level, Organizations, refers to AI capable of managing entire enterprises.

The framework, developed by OpenAI executives and senior leaders, is considered a work in progress. The company plans to collect feedback from employees, investors, and its board, with the possibility of refining the levels over time.


Nano Banana 2 Arrives In MENA For Google Gemini Users

Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.


Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.

The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.

Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.

The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.


Under the hood, the system taps Gemini's broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have also been upgraded, without slowing output.

By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.

The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
