A new study, "When Patient Questions Are Answered With Higher Quality and Empathy by ChatGPT than Physicians," is more than interesting.


Not surprising, because chatbots are not held to a one-sentence response like many doctors are. I read another study where the chatbot outperformed doctors in terms of diagnostic accuracy.
Like doctors, chatbots, while sounding authoritative, do make factual mistakes. Unlike doctors, if you correct a chatbot, it will actually apologize and usually then cite the right answer.
Jim
Oh yeah. If you know your subject, you will sooner or later catch the chatbot making a mistake.
I just say something like, "I think you're wrong; the answer should be ....; please look again." And unlike a doctor -- or many other humans, for that matter -- it takes criticism very well!
PS: Chatbots are great, but always double-check anything important against good source references found via Google. I believe Bing's version will actually give you those links.
Jim
It's a great research companion to Google, especially for complex questions that need to draw on multiple sources. The major problem is -- like your doctor -- that it has the same authoritative and persuasive tone whether its answers are correct or false. So, as with your doctor, always cross-check 😀😀😀
I would hope that anyone who posts here using Google's words always posts the source of the information. Same with using a chatbot. But you bring up a problem with chatbots moving forward. Some, like Musk, think AI will doom civilization as we know it. Some think we are already doomed. 😀😀😀
The study compares the chatbot with entries that physicians typed into social media; it's not exactly comparing chatbot output with physician output. ChatGPT is very good at smoothing out and blanderising text, so this is no surprise. Ironically, the chatbots would need to become even better still, and they will.
AI uses material that is online, for good, bad, or indifferent. I asked ChatGPT for the number of Group 1 drivers whose licences the DVLA had revoked due to a non-compliant visual field. The AI's answer included people who had failed at the start of the driving test, where you are asked to read a registration plate at 20 metres. A person with a non-compliant visual field can have macular sparing (like me) and still be able to read a registration plate at 20 metres, with or without glasses. Failing the visual field requirement is more likely to come from an Esterman test on a driver who holds a full licence.
Many years ago I worked for a consultancy where a Chartered Engineer had produced results from a spreadsheet. The results didn't look right to me, so I checked the spreadsheet cells and found mistakes that produced arithmetical errors. The answers had been accepted because they came from a computer and the spreadsheet had been set up by a Chartered Engineer.
"multiple paths", I think you mean rabbit holes.