People to Confide in AI Chatbots for Healthcare

Shown above is a custom, height-adjustable workbench featuring upper storage, a monitor mount, and an ESD worksurface kit that protects sensitive microelectronics from damage caused by electrostatic discharge.

Formaspace offers a full range of furniture options for classrooms and educational laboratories, lab furniture for biotech and healthcare research, industrial furniture for factories, as well as furniture for government and military applications.

Find out more about the potential for AI-based chat systems to help healthcare providers take better care of their patients.

“The widespread adoption of online healthcare portals (part of the push toward electronic records) has led to an overwhelming increase in the number of electronic messages sent by patients.”
— Formaspace
AUSTIN, TEXAS, UNITED STATES, July 23, 2024 /EINPresswire.com/ -- Tea And Sympathy: Do AI-Based Chat Systems Have What It Takes To Offer Accurate And Empathetic Patient Care?

AI-based chat systems are getting “smarter” – but are they smart enough to offer reliable patient care?

One of the first available scientific studies indicates that AI-based chat systems can perform admirably when answering patient questions.

A research team headed by Dr. John W. Ayers at the University of California San Diego recently set out to evaluate which gives better answers to patient questions: ChatGPT or a human physician.

The team compiled a set of approximately 200 typical patient questions (extracted from the AskDocs forum on Reddit).

A three-member panel of licensed healthcare providers evaluated each response – without knowing whether the answer was provided by a human physician or by ChatGPT.

If you are a physician reading this article, brace yourself – the results came back strongly in favor of ChatGPT: the panel preferred the ChatGPT response nearly 80% of the time.

The panelists rated ChatGPT’s responses to common medical questions as high quality 78.5% of the time (versus only 22.1% for the physician-sourced answers).

ChatGPT also demonstrated a much higher level of empathy toward patients: its responses were rated empathetic 45% of the time, versus just 4.6% for the physicians’ answers – a statistically significant difference.

Should physicians be concerned about AI taking over their jobs?

In the short term, the answer is no.

Instead, AI might be a godsend for helping healthcare providers keep up with their ever-growing workload. The widespread adoption of online healthcare portals (part of the push toward electronic records) has led to an overwhelming increase in the number of electronic messages sent by patients. Reading, prioritizing, and responding to these messages has become very burdensome for healthcare providers, so AI-based tools could help a lot – either by reading the inquiries and providing draft responses offline (for the provider to review and send to the patient) or by “chatting” with the patient directly.
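To make that draft-response idea concrete, here is a minimal Python sketch of the workflow: an AI model drafts a reply to a patient portal message, and the draft lands with the clinician for review before anything is sent. It assumes the OpenAI Python SDK is installed and an API key is configured; the model name, system prompt, and the `draft_reply` helper are illustrative placeholders, not a description of any vendor's actual patient-messaging product.

```python
# Minimal sketch: have an LLM draft replies to patient portal messages
# for a clinician to review before anything is sent. Assumes the OpenAI
# Python SDK (openai >= 1.0) and the OPENAI_API_KEY environment variable;
# the prompt wording and model name are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are drafting a reply to a patient portal message on behalf of a "
    "clinic. Be accurate and empathetic, do not diagnose or prescribe, and "
    "flag anything urgent for immediate clinician attention."
)

def draft_reply(patient_message: str) -> str:
    """Return a draft reply; a clinician must review it before it is sent."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content

# The draft goes to a review queue for the provider, never directly to the patient.
draft = draft_reply("I've had a dull ache in my chest since yesterday.")
print("DRAFT FOR CLINICIAN REVIEW:\n", draft)
```

The design point is simply that the model's output is addressed to the clinician, not the patient; the human provider remains the sender of record.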

AI Chat Trustworthiness Is Marred By Potential Hallucinations. How Will Patients React?

Unfortunately, the Large Language Model (LLM) systems that power AI-based chat systems can sometimes get it wrong.

This can happen either because the language models were trained on faulty data or because they hit a gap or glitch in their “knowledge,” which they sometimes “fill in” with related but potentially incorrect information.

(Depending on the circumstances, providing wrong answers can lead to legal exposure – as this recent Wall Street Journal article discusses.)

A recent (now infamous) non-medical example comes to us from Google’s AI Overviews feature, which suggested (incorrectly, we assure you!) that the best way to keep the cheese topping from sliding off pizza is to glue it in place!

AI researchers are trying to figure out ways to reduce or eliminate these so-called “hallucinations,” in which the AI systems provide incorrect information.

Faulty recipes could be lethal if followed by unsuspecting cooks. The same applies to medical advice. Imagine a case where a patient is asking an AI-based chat system about some concerning symptoms they are having, such as pain in their chest.

Given today’s technology, it’s not prudent to allow an AI-based chat system to diagnose serious health conditions – such as the onset of a heart attack or a bout of gastroesophageal reflux disease (GERD) – without consulting a human healthcare provider.

On the other hand, AI-based systems might scan all the incoming requests more quickly than humans can, helping flag potentially critical patient concerns as the highest priority for attending care providers to address immediately.
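A deliberately oversimplified Python sketch of that triage idea follows; the keyword list and weights are assumptions made up for illustration only – a real system would need clinically validated criteria and would still route every message to a human.

```python
# Illustrative triage sketch: score incoming portal messages by urgency
# so potentially critical ones surface first. The keywords and weights
# below are assumptions for demonstration, not clinical criteria.
from dataclasses import dataclass, field

URGENT_TERMS = {
    "chest pain": 10,
    "shortness of breath": 10,
    "severe bleeding": 9,
    "fainted": 8,
}

@dataclass(order=True)
class TriagedMessage:
    urgency: int
    text: str = field(compare=False)

def triage(messages: list[str]) -> list[TriagedMessage]:
    """Sort messages so the highest-urgency items come first."""
    scored = []
    for text in messages:
        lowered = text.lower()
        score = sum(w for term, w in URGENT_TERMS.items() if term in lowered)
        scored.append(TriagedMessage(urgency=score, text=text))
    return sorted(scored, reverse=True)

inbox = [
    "Can I get a refill on my allergy medication?",
    "I've had chest pain and shortness of breath since this morning.",
    "What time is my appointment next week?",
]
for msg in triage(inbox):
    print(msg.urgency, "-", msg.text)
```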

The bottom line is that state-of-the-art AI-based chat tools seem ready to collect patient information (which they can ask the patient to review and confirm before submitting). They can also create draft responses, but from a duty-of-care (and malpractice) perspective, the provider should review everything before issuing orders, updating the medical record, prescribing medications, etc.
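As a rough sketch of that review-before-commit principle, the following hypothetical Python snippet stages AI-drafted actions and refuses to commit anything – an order, a chart update, a prescription – without an explicit provider sign-off. All of the names here are made up for illustration.

```python
# Sketch of a provider sign-off gate: AI-generated drafts and structured
# intake data are staged, and nothing touches the chart or a prescription
# until a licensed provider explicitly approves it. All identifiers here
# are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftAction:
    kind: str                       # e.g. "reply", "order", "prescription"
    payload: str                    # AI-drafted content, patient-confirmed where applicable
    approved_by: Optional[str] = None

def approve(action: DraftAction, provider_id: str) -> DraftAction:
    """Record the provider's sign-off; only approved actions may be committed."""
    action.approved_by = provider_id
    return action

def commit(action: DraftAction) -> None:
    """Refuse to act on anything a provider has not reviewed."""
    if action.approved_by is None:
        raise PermissionError("Provider review required before committing.")
    print(f"Committing {action.kind} approved by {action.approved_by}")

draft = DraftAction(kind="reply", payload="Draft response to patient question...")
commit(approve(draft, provider_id="dr_smith"))   # proceeds
# commit(DraftAction(kind="prescription", payload="..."))  # would raise PermissionError
```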

Are AI Chat-Based Patient Care Systems Susceptible To Fraud?

Fraudulent actors have had a field day with AI-based tools.

In 2019, criminals were able to “clone” the voice of a German CEO to place a fraudulent voice call to one of his direct reports (the head of a company subsidiary based in the UK). The synthesized voice convinced the victim to expedite a $243,000 payment to a Hungarian supplier, which the criminals subsequently transferred to a bank in Mexico and other locations.

Unfortunately, telehealth has already become a conduit for criminal activity.

There have already been cases of company insiders taking advantage of the more relaxed post-Covid rules for prescribing drugs to patients via telehealth calls – the CEO of Done Global, Ruthia He, and its clinical president, David Brody, were recently arrested for allegedly running a “pill mill” operation that distributed Adderall and other stimulants to patients who didn’t meet the prescription requirements. The criminal charges accuse He, Brody, and the company of collecting $100 million in fraudulent prescription reimbursements.

While we are not aware of any such cases, organized drug trafficking gangs could also theoretically leverage AI-based chat systems to automate prescription requests for controlled substances (such as opioids) – tricking unwitting providers into prescribing medications for multiple fake AI patients.

Read more...

Julia Solodovnikova
Formaspace
+1 800-251-1505
