OpenAI Launches ChatGPT Health with Isolated, Encrypted Health Data Controls

Artificial intelligence (AI) company OpenAI on Wednesday announced the launch of ChatGPT Health, a dedicated space that allows users to have conversations with the chatbot about their health.

To that end, the sandboxed experience offers users the optional ability to securely connect medical records and wellness apps, including Apple Health, Function, MyFitnessPal, Weight Watchers, AllTrails, Instacart, and Peloton, to get tailored responses, lab test insights, nutrition advice, personalized meal ideas, and suggested workout classes.

The new feature is rolling out for users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the U.K.

“ChatGPT Health builds on the strong privacy, security, and data controls across ChatGPT with additional, layered protections designed specifically for health — including purpose-built encryption and isolation to keep health conversations protected and compartmentalized,” OpenAI said in a statement.

Noting that over 230 million people globally ask health and wellness-related questions on the platform every week, OpenAI emphasized that the tool is designed to support medical care, not replace it, and is not a substitute for diagnosis or treatment.

The company also highlighted several privacy and security features built into the Health experience:

  • Health operates in a silo with enhanced privacy and its own memory, safeguarding sensitive data using "purpose-built" encryption and isolation
  • Conversations in Health are not used to train OpenAI’s foundation models
  • Users who attempt to have a health-related conversation in ChatGPT are prompted to switch over to Health for additional protections
  • Health information and memories are not used to contextualize non-Health chats
  • Conversations outside of Health cannot access files, conversations, or memories created within Health
  • Apps can only connect with users’ health data with their explicit permission, even if they’re already connected to ChatGPT for conversations outside of Health
  • All apps available in Health are required to meet OpenAI's privacy and security requirements, such as collecting only the minimum data needed, and must pass an additional security review before being included in Health

Furthermore, OpenAI pointed out that it has evaluated the model that powers Health against clinical standards using HealthBench, a benchmark the company introduced in May 2025 to better measure the health capabilities of AI systems, with a focus on safety, clarity, and escalation of care.

“This evaluation-driven approach helps ensure the model performs well on the tasks people actually need help with, including explaining lab results in accessible language, preparing questions for an appointment, interpreting data from wearables and wellness apps, and summarizing care instructions,” it added.

OpenAI's announcement follows an investigation from The Guardian that found Google AI Overviews to be providing false and misleading health information. OpenAI and Character.AI are also facing several lawsuits claiming their tools drove users to suicide and harmful delusions after they confided in the chatbots. A report published by SFGate earlier this week detailed how a 19-year-old died of a drug overdose after trusting ChatGPT for medical advice.

The Hacker News