AI searches- be careful!

Rico49
24 Replies

I am posting this as information only. Someone posted a while back about Google vs AI and pointed out how new this AI technology is.

I just had an experience that has me being VERY careful using this technology going forward. What I like about the AI / Microsoft search is that it posts references for where the information was derived.

So this is what happened.

I searched "can CLL cause issues with the heart."

result:

"Yes, chronic lymphocytic leukemia (CLL) may develop heart conditions as well. Over half of people with CLL treated with Imbruvica (ibrutinib) may develop atrial fibrillation (AFib). During AFib, the heart beats irregularly. AFib can lead to an increased risk of more serious cardiovascular problems1. In leukemia, it is possible for cancer cells to infiltrate blood vessels, causing issues such as ischemic cardiac disease, which is also called coronary artery disease (CAD). This is where the heart does not receive an adequate supply of blood and oxygen2."

The next to last sentence says: "In leukemia, it is possible for cancer cells to infiltrate blood vessels, causing issues such as ischemic cardiac disease, which is also called coronary artery disease (CAD). "

I then searched: "is it rare" and got:

"CLL can cause ischemic cardiac disease but it is rare. The incidence of ischemic heart disease in CLL patients is 1.5%."

So then I went to the references noted and was unable to find the specifics to support the claims.

I then asked the question a little differently and got a different answer- "no it didn't cause ischemic cardiac disease."

I then challenged the search engine by saying it had previously said it did cause ischemic cardiac disease.

This was its answer:

I apologize for the confusion. You are correct. I made a mistake in my previous response. I could not find any documentation that shows CLL can cause ischemic cardiac disease.

I hope this clears up any confusion. Let me know if you have any other questions.

So be very careful if you are searching for information. Confirm the references. At least you get a reference; Google just tells you what they want. What surprised me was that it changed its answer after I challenged its results. Remember, I reviewed the references noted and could not find anything relevant.

You still need to feel comfortable with the reference. This is an example of why AI is scary.

Written by Rico49
Thundercat2

Ugh. I can't think of anything else to say!

AussieNeil (Partner, Administrator)

Thanks, that's a very instructive example of what ChatGPT researchers call "hallucination", but which we would call lying. I've mentioned a similar but less dangerous example in my recent post about this technology breakthrough, but with limitations:

healthunlocked.com/cllsuppo...

Neil

Rico49 in reply to AussieNeil

I thought it was you that wrote that article a few months ago. I was just amazed when it said that it had made a mistake and had given me all that incorrect information. Scary!

OwnedByCockerSpaniel

Both OpenAI/Microsoft's ChatGPT and Google's Bard are "Large Language Models".

They are demonstrations of language understanding, and are great fun to have conversations with.

BUT...

They are not, and were never intended to be, factually accurate.

Take anything they provide with a big grain of salt.

Rico49 in reply to OwnedByCockerSpaniel

where did you get this information ? thx

OwnedByCockerSpaniel in reply to Rico49

theguardian.com/technology/...

theguardian.com/commentisfr...

nlp.stanford.edu/pubs/tamki...

moveworks.com/insights/larg...

"1. Inconsistent accuracy

Large language models, including OpenAI's GPT-3.5, are powerful tools that can provide accurate responses to complex questions. However, despite their impressive capabilities, there is still a risk of inaccurate or false responses, known as "hallucination."

This phenomenon can have serious implications in critical industries like healthcare and business operations. It is essential to implement safeguards such as human oversight to refine inputs and control outputs to mitigate this risk. Currently, many applications of large language models require human supervision to ensure reliable results."

Also

theguardian.com/technology/...

Rico49 in reply to OwnedByCockerSpaniel

Interesting article in a newspaper, basically written by one player. I don't put newspapers in the category of unbiased. Sam Altman is selling his product.

Stanford is not always a good source- very political and biased in many areas.

Forrester names Moveworks, so who is Moveworks and what makes them an expert?

You are making my point. Know who your references are. The media is the last place I would put my trust; they report what they want you to hear. That's why this is scary.

If you look at a research report at the National Institutes of Health or the other medical journals and don't trust them, then I wouldn't search anything, period.

You quoted:

It is essential to implement safeguards such as human oversight to refine inputs and control outputs to mitigate this risk. Currently, many applications of large language models require human supervision to ensure reliable results."

Amen! For now that is "US" as we use the AI search tool.

OwnedByCockerSpaniel in reply to Rico49

I agree. Sometime in the future these chatbots may help with diagnosis and support.

BUT NOT yet.

The technology is still too immature. Training takes in huge numbers of web pages and social media posts so the chatbots learn conversational style. That data includes not only scientific data but also a lot of conspiracy sites, comments and pure disinformation.

They will likely learn and improve. Human oversight and correction should teach them the difference.

I am urging caution to anyone on this group.

theconversation.com/how-goo...

Rico49 in reply to OwnedByCockerSpaniel

Amen! That's exactly why I wrote the initial post. Good luck and good health for all!

JigFettler (Volunteer)

It's very important to address this issue, especially as we formulate the beliefs that carry our individual CLL situations forward.

How do our Medical Advisors feel when we come to our respective consultations armed with AI Chatbot generated information?

Which Chatbot thingy do we use even?

How do we as patients resolve information conflicts as we go forward on our CLL journeys?

I am immediately gripped by the desire NOT to lose my ability to think, plan and problem solve myself.

I can foresee issues arising going forward for us all as we are forced to choose options based on information obtained by unverified means.

Google was ground breaking and exciting I seem to recall. I am not so comfortable with this AI business.

However what is a comfort is this, our support forum, where we can share, debate and guide each other.

No doubt the tech wizards amongst us will emerge with wizardry wisdom.

Jig

... I can feel a day of deep reflection coming on.

Rico49 in reply to JigFettler

You said very well what I was trying to say. I am an engineer who was trained in problem solving, like all engineers. The bottom line is to know the sources. Is the reference a medical journal, or something authored by someone who may have a bias? It complicates the process significantly because you have to confirm the search's findings against the references. At least with AI you get references. I have only used Microsoft's Bing version to date.

Remember AI is still very new.

bennevisplace in reply to Rico49

What troubles me most is that "know the sources" is going to become increasingly difficult. Already we have fake websites mimicking the real thing so well that in just a few hours many people open their bank and credit card accounts to them thinking they're making a purchase.

We also have fake information sources, i.e. plausible articles written by bots and inserted into reputable publications under the name of real journalists.

The experts predict that AI will ratchet up such nefarious activities. Then there's state sponsored cyber-warfare... Regulation? Ha!

Rico49 in reply to bennevisplace

You just nailed it. It will become difficult to even trust sources you think are trustworthy, because you don't know if the content really came from them. Someone will publish a medical report in the New England Journal of Medicine format that looks exactly like the real site. This is what makes AI search so scary.

bennevisplace

Veeeery interesting. AI, designed by HI, still bears the hallmarks of HI. Isn't your experience pretty much how you'd expect a human to respond? Come up with a glib, half-researched answer, then refine or correct it when challenged.

These AI systems will quickly become more sophisticated though, and like it or not they are going to challenge existing knowledge platforms sciencefocus.com/future-tec...

If we think that cyber-crime and web-spread disinformation are big problems now, just wait criticalinsight.com/resourc...

Already ChatGPT has the capacity to spontaneously generate not only misinformation but fake sources to back it up, apparently without the input of bad actors theguardian.com/commentisfr...

Rico49 in reply to bennevisplace

Exactly!

mdsp7

Thank you for the heads up.

Yes, that rings true. Recently my husband's colleague quizzed an AI "mind" about a physics question, and similarly got credible-sounding falsehood after falsehood, for which it provided fake references. When called out, it apologized, claimed it was only a computer, then made more stuff up, again sounding confident and backing up its claims with fake references, and when called out again, apologized again, and round and round.

Rico49 in reply to mdsp7

Amen!

Snakeoil

We have learnt how to use search engines. With experience, search results improve.

Similarly, it takes experience to formulate questions in a way that helps ChatGPT to be more accurate.

It simply picks the most probable next phrase given a previous phrase.

It is incredible to me that such a strategy can write so convincingly. And irritating that it is equally convincing when blatantly wrong. On the other hand, I have colleagues with similar behaviour 🤪

Anyway, I have found that the key is lots and lots of context. In some sense, the opposite of search engines.
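For the technically curious, that "most probable next phrase" idea can be sketched as a toy word-level model. This is purely illustrative: the vocabulary and counts below are made up, and real chatbots use neural networks trained on vast text collections, not a little lookup table. But the core move is the same: given what came before, sample a likely continuation.

```python
import random

# Made-up word-to-next-word counts, standing in for learned probabilities.
bigrams = {
    "CLL": {"is": 3, "can": 2},
    "can": {"cause": 4, "affect": 1},
    "cause": {"fatigue": 2, "anemia": 1},
}

def next_word(word, rng):
    """Sample the next word in proportion to its observed count."""
    options = bigrams[word]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, length=3, seed=0):
    """Extend a phrase one probable word at a time, like a tiny chatbot."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and out[-1] in bigrams:
        out.append(next_word(out[-1], rng))
    return " ".join(out)

print(generate("CLL"))
```

Note that nothing in this loop checks whether the generated phrase is *true*; it only checks what usually *follows*. That is why fluent-sounding output can be factually wrong.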

Rico49 in reply to Snakeoil

Yes, the problem is we won't be able to trust the sources. They can be fake. So now if a search shows you the sources, you have to go to that source, and probably its website, to confirm that it's a valid report. So at the end of the day, a search engine is supposed to speed things up, but in reality it will just slow things down, because you're going to have to verify every single step along the way.

Snakeoil in reply to Rico49

For important things I always verify the primary source anyway. Anyone can misinterpret. It behaves just like a random person on the internet.

NoClew

Have not tried an AI search. Prefacing my Google searches with "scholarly articles only" seems to have served me well. Often results will start with clearly identified paid "ADs" , then go directly to NIH, Mayo Clinic, Cleveland Clinic and the like. Do hope AI does not interfere.

Rico49 in reply to NoClew

The problem is that going forward you won't know if that article is really from the Mayo Clinic. It could be a fake report made to look like it came from Mayo.

scryer99

Public-facing AI is a good party trick. But it is prone to hallucination and presents things in ways that sound good to human ears - after all, it's been optimized to do that. And attempts to make it better are in many cases making it much worse, for technical reasons having to do with how AI models are built.

Treat it like a Wikipedia written by teenagers. It might be a good starting point for research, but ultimately, its advice is worth what you paid for it.

Rico49 in reply to scryer99

I couldn't have said it better.
