Ready to be diagnosed by ChatGPT?

Humans are not mathematical models. The COLLECTION of clinical data by a perceptive doc is critical to good medical care. I've seen repeatedly in my own family that folks don't always give complete or honest answers. Hypochondriac types can overwhelm a busy office, while very stoic folks may verbally deny chest pain in the middle of a severe heart attack. And who breaks the news of a terminal diagnosis to a patient or family? I somehow doubt AI will ever replace the compassion that makes up so much of health care.

And don't forget that all AI algorithms are NOT the same. One I played with recently gave seriously wrong answers about different types of COVID tests. For those using AI... Trust but Verify.
 
I'm not trying to say that AI is good or bad in healthcare, but I thought I could add to the conversation with what I heard (as I remember it!). About 10 years ago I sat in on presentations by Google X (the R&D arm of Google) and IBM Watson. They both said they were going to get into health care through "big data" and become major players. While ChatGPT is a different company, the use of AI and data in medicine has been in development for a while. The two examples they gave back then were:

Dermatology: it can take forever to get an appointment, and a condition like a growing mole could advance before a doctor could see it. They ran a program where a patient took a picture and sent it to the AI, which had a high accuracy rate in deciding whether the lesion was benign or needed to be seen quickly. The patients who needed to be seen quickly got into the practice much faster than previously.
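To make that triage idea concrete, here's a toy sketch of the logic (my own illustration, not Google's code; the scoring function, file names, and threshold are all hypothetical stand-ins for a real trained image classifier):

```python
# Toy sketch of the dermatology triage described above: a model scores a
# lesion photo, and high-risk cases jump the appointment queue.

def lesion_risk_score(image_path: str) -> float:
    """Stand-in for a trained classifier; returns P(needs urgent review)."""
    # Hypothetical placeholder: a real model would analyze the image pixels,
    # e.g., with a convolutional neural network.
    return 0.87 if "suspicious" in image_path else 0.05

def triage(image_path: str, urgent_threshold: float = 0.5) -> str:
    """Route the patient based on the model's risk score."""
    score = lesion_risk_score(image_path)
    if score >= urgent_threshold:
        return "fast-track appointment"   # seen quickly, as in the derm example
    return "routine scheduling"           # benign-looking, standard wait

if __name__ == "__main__":
    for photo in ["suspicious_mole.jpg", "freckle.jpg"]:
        print(photo, "->", triage(photo))
```

The point is just the routing: the model doesn't replace the dermatologist, it reorders the queue.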

Oncology: They analyzed several conditions (can't remember which, but some could be gene-typed). By "analyzed" I mean they scoured the internet for every peer-reviewed study, every treatment protocol they could find, every poster presented at various oncology conferences, interviewed about 100 leading oncologists, ran it all through the AI they had at the time, and came up with optimal treatment protocols for those conditions. Their result was that, for the conditions they studied, following the protocols gave a 90% accuracy rate for diagnosis and treatment. The other 10% of cases were flagged to the practice for more intensive care and other protocols.
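As a back-of-the-envelope illustration of that flagging step (my reconstruction, not IBM's or Google's actual code; the 90% in the talk was an accuracy figure, but I'll use a model-confidence cutoff as an illustrative stand-in):

```python
# Minimal sketch of the flagging rule: follow the protocol when the model is
# confident, otherwise flag the case to the practice for human-led workup.

CONFIDENCE_FLOOR = 0.90  # hypothetical threshold, chosen to echo the 90% figure

def recommend(protocol_scores: dict) -> dict:
    """protocol_scores maps protocol name -> model confidence for this case."""
    best_protocol, confidence = max(protocol_scores.items(), key=lambda kv: kv[1])
    if confidence >= CONFIDENCE_FLOOR:
        return {"action": "follow_protocol", "protocol": best_protocol}
    # The hard cases (the ~10%): route to the oncologists for intensive review.
    return {"action": "flag_for_review", "candidates": sorted(protocol_scores)}

if __name__ == "__main__":
    print(recommend({"protocol_A": 0.95, "protocol_B": 0.03}))  # confident
    print(recommend({"protocol_A": 0.55, "protocol_B": 0.40}))  # flagged
```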

My 2 cents of editorial comment: Doctors have incredibly difficult jobs. Shrinking reimbursement, more demand as boomers age, and an expectation from the public that they are always available on a minute's notice and know everything about every condition they treat. I knew an oncologist who worked at the Dana-Farber Cancer Institute. He saw his hospital patients from 7 to 8:30 a.m., did paperwork for about an hour, then saw his in-office patients during the day. Many days after 5 he went back to the hospital to see patients. With this type of schedule he was also expected to keep up with the newest treatments and take continuing education courses to stay current. This is impossible; there are only so many hours in a day. If AI can help doctors be more current and accurate, remove mundane tasks, and help with some diagnoses to free up their time for the more challenging cases, I'm all for it. Time will tell if AI lives up to the hype.
Remember when they said computers would streamline everything, eliminate paper, and make things easier? Yet problems remain.
 
And what happens when the AI is down, and the doctor has lost some of his/her knowledge from not being challenged? How many times have you been told "the system is down"? And no one can do a simple task because they've become too accustomed to the computer performing it.
How long is a computer typically down? A few minutes? An hour? Compare that to my family doctor who is down for weeks at a time because that's how long it takes me to get an appointment. I haven't seen my PCP in a few years because I can't get a timely appointment. I saw his nurse practitioner a couple times and the last time I used a telemedicine site that I found online.
 
The suing argument is silly.

Personally, I'm all for eliminating human error. I have no doubt that AI-assisted diagnosis and care will lead to better outcomes, which the research so far seems to bear out.

As humans, we overestimate our abilities.
+1. I don’t know if the tech is there yet, but I am sure AI will get better at diagnosis than humans. Way more data, constant learning and analysis capabilities.
 
I retired last year from 28+ years in subspecialty surgery. I actually didn't mind it when people handed me their research and assumptions. If they had done some reading, it made our conversations more pertinent, and we could work through diagnostic possibilities more quickly. Of course, sometimes people were completely off base, but I'd say most were close.

We didn't use any AI-based tools for diagnosis, but it wouldn't have been a bad thing to have built into the EHR. I'm sure EPIC and others are working on that now. In my field patients still needed a detailed exam, and AI can't use an endoscope; however, it could certainly help interpret images. Again, we didn't have that tool available.

As far as "paperwork" - it's endless and regulations and hospital administrations keep adding to it. The concept of a Doc having several supporting staff to handle that isn't realistic simply due to costs and the fact that the Doc must personally sign off on everything, assuming the responsibility and liability. Of course there is supporting staff from Nurses to office staff/manager etc. but there are huge amounts of stuff the Docs must deal with themselves such as every lab result, every Xray/CT/MRI must be reviewed and signed off by the Doc. Ughhh....I'm stopping because this is making relive "work". Today I am going to pull some weeds in the garden beds, work in my shop and go to the gym.
 
Nice post; it tells the story from inside the system.
 