News
Greek F-16 Fighter Jets Intercept Beirut-Bound MEA Flight
The Lebanese pilot is thought to have used an incorrect radio frequency — a major blunder from the son of the airline’s chairman.
A “Code Renegade” warning recently put Greek aviation authorities on high alert, following an alarm issued by a NATO air control center in Torrejón, Spain. Code Renegade is a distress signal typically used in a suspected hijacking situation. In this case, the code was issued after a Middle East Airlines (MEA) flight bound for Beirut failed to answer calls and went into complete radio silence.
After repeated attempts to reach the aircraft’s captain, authorities grew concerned for the plane’s safety, prompting Greek air defense to scramble two F-16 fighter jets from Souda to intercept the unresponsive civilian airliner over Argolida in the northeastern Peloponnese.

Lebanon-based aircraft tracker InterSky took to Twitter to report the details of the unfolding situation:
“Code Renegade set Greek authorities on alert following a relevant signal by the NATO air control center in Spain (CAOC Torrejón), to intercept a non-responsive civil aircraft Airbus A321 with 145 passengers onboard that had taken off from Madrid and was bound for Beirut.”
In a further twist to the story, contact was eventually reestablished with the aircraft, after which it emerged that the MEA pilot, Abed Al-Hout, was the son of Mohammed Al-Hout, chairman of the board of directors of Middle East Airlines. The chairman has previously drawn criticism for employing relatives at various levels of the company, and in this instance, his son had failed to tune the aircraft’s radio to the correct frequency, resulting in the radio silence.
The news is a further embarrassing blow for Middle East Airlines, which has recently lost over 20% of its staff to other airlines as Lebanon’s financial crisis continues to deepen.
OpenAI’s ChatGPT Health Is A Private Space For Health Data
A new health mode lets the popular AI platform tap medical records and fitness apps while walling off sensitive information.
OpenAI has created ChatGPT Health, a separate space inside its chatbot platform for handling medical and wellness data. The opt-in feature starts with a small US cohort before widening out.
Health-related questions have long driven traffic to AI tools. OpenAI says over 230 million people ask ChatGPT about health or insurance each week. The new mode adds personal context to that behavior but stops short of diagnosis or treatment advice.
Users can connect records from participating US providers through b.well and link apps such as Apple Health, MyFitnessPal, Function and Weight Watchers. Some integrations are US-only, and Apple Health requires an iOS device. Once connected, ChatGPT can surface patterns in lab results, summarize information ahead of a clinic visit or help map diet and exercise choices against past data.
The data sits apart from other chat information. Health has its own memories and does not spill into other conversations. Users can view or delete health memories at any time. OpenAI says this material is not used to train its models.
Security is also tighter in this space. Health adds data isolation and purpose-built encryption on top of the platform’s baseline protections. App connections require explicit permission, and disconnecting cuts off the data feed immediately.
“ChatGPT Health is another step toward turning ChatGPT into a personal super-assistant that can support you with information and tools to achieve your goals across any part of your life,” wrote Fidji Simo, OpenAI’s applications chief.
Physicians had input during development, though OpenAI has not detailed how that input shaped the end product. The launch follows HealthBench, a benchmark released in May to test models on realistic medical cases.
While currently rooted in the US healthcare ecosystem, the approach may draw interest in the Gulf and wider MENA markets as governments push digital health records and patient portals under modernization programs. Adoption will depend on whether users trust an AI assistant with such personal material and whether it fits clinical routines.
For OpenAI, the move marks a cautious step into regulated terrain and signals a shift toward sector-specific uses of generative AI.
