News
Netflix Is Testing A Way To Stop Its Users From Sharing Their Passwords
Netflix appears to be done tolerating users sharing their passwords with other people: the popular video streaming service is testing a new account ownership verification prompt.
“This test is designed to help ensure that people using Netflix accounts are authorized to do so,” said Netflix spokesperson Ebony Turner. Users who see the prompt are asked to verify account ownership by entering a code sent via email or text. At the time of writing, the test seems to be rolling out more or less randomly, but that could quickly change in the future.
Netflix, which now has more than 200 million subscribers around the world, said that users who are unable to verify account ownership wouldn’t be able to continue using the service unless they purchase their own subscription.
While this measure is unlikely to stop password sharing among friends and extended family members, who can simply share the required authorization code, it may at least slow down password sharing on various online forums and dark web sites.
The decision to crack down on password sharing is likely a reaction to the growing competition Netflix is facing, with streaming services such as Amazon Prime Video, HBO Max, Disney Plus, and Hulu offering their own original TV shows and movies.
Back in 2016, Netflix co-founder and chief executive Reed Hastings said that password sharing was something Netflix had to learn to live with because the amount of legitimate password sharing between family members was too high. Even in 2019, chief product officer Greg Peters stated that the streaming service had no plans to change its stance on password sharing.
Right now, Netflix’s terms of service state that the service is intended “for your personal and non-commercial use only and may not be shared with individuals beyond your household.” It’s not really clear whether Netflix means a physical household, so we wouldn’t be surprised to see the company clarify its terms of service if the new account ownership verification prompt becomes a standard feature.
OpenAI’s ChatGPT Health Is A Private Space For Health Data
A new health mode lets the popular AI platform tap medical records and fitness apps while walling off sensitive information.
OpenAI has created ChatGPT Health, a separate space inside its chatbot platform for handling medical and wellness data. The opt-in feature starts with a small US cohort before widening out.
Health-related questions have long driven traffic to AI tools. OpenAI says over 230 million people ask ChatGPT about health or insurance each week. The new mode adds personal context to that behavior but stops short of diagnosis or treatment advice.
Users can connect records from participating US providers through b.well and link apps such as Apple Health, MyFitnessPal, Function and Weight Watchers. Some links are US-only, while Apple Health needs iOS. Once connected, ChatGPT can surface patterns in labs, summarize information ahead of a clinic visit or help map diet and exercise choices against past data.
The data sits apart from other chat information. Health has its own memories and does not spill into other conversations. Users can view or delete health memories at any time. OpenAI says this material is not used to train its models.
Security is also tighter in this space. Health adds isolation and purpose-built encryption on top of the platform’s baseline protections. App connections require explicit permission, and disconnecting cuts off the data feed immediately.
“ChatGPT Health is another step toward turning ChatGPT into a personal super-assistant that can support you with information and tools to achieve your goals across any part of your life,” wrote Fidji Simo, OpenAI’s applications chief.
Physicians had input during development, though OpenAI has not detailed how that shaped the end product. The launch follows Health Bench, a dataset released in May to test models on realistic medical cases.
While currently rooted in the US healthcare ecosystem, the approach may draw interest in the Gulf and wider MENA markets as governments push digital health records and patient portals under modernization programs. Adoption will depend on whether users trust an AI assistant with such personal material and whether it fits clinical routines.
For OpenAI, the move marks a cautious step into regulated terrain and signals a shift toward sector-specific uses of generative AI.
