Anthropic Launches Claude AI for Healthcare with Secure Health Record Access

Anthropic has become the latest artificial intelligence (AI) company to announce a new suite of features that allows users of its Claude platform to better understand their health information.

Under an initiative called Claude for Healthcare, the company said U.S. subscribers of Claude Pro and Max plans can opt to give Claude secure access to their lab results and health records by connecting to HealthEx and Function, with Apple Health and Android Health Connect integrations rolling out later this week via its iOS and Android apps.

“When connected, Claude can summarize users’ medical history, explain test results in plain language, detect patterns across fitness and health metrics, and prepare questions for appointments,” Anthropic said. “The aim is to make patients’ conversations with doctors more productive, and to help users stay well-informed about their health.”

The development comes just days after OpenAI unveiled ChatGPT Health, a dedicated experience that lets users securely connect medical records and wellness apps to receive personalized responses, lab insights, nutrition advice, and meal ideas.

The company also pointed out that the integrations are private by design: users can explicitly choose the kind of information they want to share with Claude, and can disconnect or edit Claude's permissions at any time. As with OpenAI, Anthropic says the health data is not used to train its models.

The expansion comes amid growing scrutiny over whether AI systems can avoid offering harmful or dangerous guidance. Recently, Google removed some of its AI summaries after they were found to be providing inaccurate health information. Both OpenAI and Anthropic have emphasized that their AI offerings can make mistakes and are not substitutes for professional healthcare advice.

In its Acceptable Use Policy, Anthropic notes that for high-risk use cases related to healthcare decisions, medical diagnosis, patient care, therapy, mental health, or other medical guidance, a qualified professional in the field must review the generated outputs "prior to dissemination or finalization."

“Claude is designed to include contextual disclaimers, acknowledge its uncertainty, and direct users to healthcare professionals for personalized guidance,” Anthropic said.
