How to Solve the Problem of Long Wait Times: Experts Welcome ChatGPT Health for Access to Medical Consultations

Experts have endorsed the launch of ChatGPT Health for health-related consultations, despite the risk of hallucinations in large language models. This is reported by TechCrunch.

Sina Bari, a practicing surgeon and head of the AI division at iMerit, shared an experience where his patient consulted ChatGPT:

“Recently, he came to me after I recommended a medication, showing me a printed conversation with the chatbot. It stated that there’s a 45% chance of developing pulmonary artery thrombosis.”

Dr. Bari traced the sources and discovered that the statistic came from a study of the drug's effects on a niche group of patients with tuberculosis. Those findings were not applicable to his patient's clinical scenario.

Despite the inaccuracies, the physician positively evaluated the launch of ChatGPT Health. He believes the service allows individuals to discuss health issues in a more private setting.

“I think it’s great. It’s something that’s already happening, so formalizing the process to protect patient information and implementing safety measures will make it more efficient for patients,” commented Dr. Bari.

Users can receive more personalized advice from ChatGPT Health by uploading medical records and syncing the app with Apple Health and MyFitnessPal. This level of access to personal data has raised concerns in the community.

“Suddenly, medical data is being transferred from organizations that comply with HIPAA to providers that do not. It will be interesting to see how regulators respond,” noted Itai Schwartz, co-founder of MIND.

Over 230 million people turn to ChatGPT with health-related questions every week. Many have stopped "Googling" symptoms, opting instead for the chatbot as their source of information.

“This is one of the biggest applications of ChatGPT. Therefore, it’s logical that they would want to create a more private, secure, and optimized version of the chatbot for healthcare inquiries,” Schwartz emphasized.

The main issue with chatbots remains "hallucinations," which is particularly critical in healthcare. A study by Vectara showed that OpenAI's GPT-5 "hallucinates" more frequently than rival models from Google and Anthropic.

However, Stanford University’s Professor of Medicine Nigam Shah considers these concerns secondary. He argues that the real problem within the system is the difficulty of accessing doctors, rather than the risk of receiving incorrect advice from ChatGPT.

“Currently, if you contact any healthcare system and want to see a primary care physician, you may have to wait three to six months. If given the choice to wait half a year to see a real specialist or to talk right away with someone who can help, what would you choose?” he remarked.

Administrative tasks can consume about half of a doctor’s time, significantly reducing the number of patients they can see. Automating these processes would allow specialists to focus more on their patients.

Dr. Shah leads a team at Stanford developing ChatEHR—software designed to help physicians work more efficiently with electronic medical records.

"Making the records more user-friendly will let doctors spend less time searching for the information they need," stated Dr. Sneha Jain, one of the first testers of ChatEHR.

In January, Anthropic announced the launch of Claude for Healthcare—a toolkit for healthcare providers and patients.