I am posting this as information only. Someone posted a while back about Google vs AI and pointed out how new this AI is.
I just had an experience that has me being VERY careful using this technology going forward. What I like about the AI / Microsoft search is that it posts references showing where the information was derived.
So this is what happened.
I searched "can CLL cause issues with the heart."
result:
"Yes, chronic lymphocytic leukemia (CLL) may develop heart conditions as well. Over half of people with CLL treated with Imbruvica (ibrutinib) may develop atrial fibrillation (AFib). During AFib, the heart beats irregularly. AFib can lead to an increased risk of more serious cardiovascular problems1. In leukemia, it is possible for cancer cells to infiltrate blood vessels, causing issues such as ischemic cardiac disease, which is also called coronary artery disease (CAD). This is where the heart does not receive an adequate supply of blood and oxygen2."
The next to last sentence says: "In leukemia, it is possible for cancer cells to infiltrate blood vessels, causing issues such as ischemic cardiac disease, which is also called coronary artery disease (CAD). "
I then searched: "is it rare" and got:
"CLL can cause ischemic cardiac disease but it is rare. The incidence of ischemic heart disease in CLL patients is 1.5%."
so then I went to the references noted and was unable to find the specifics to support the claims.
I then asked the question a little differently and got a different answer- "no it didn't cause ischemic cardiac disease."
I then challenged the search engine by saying it had previously said it did cause ischemic cardiac disease.
This was its answer:
I apologize for the confusion. You are correct. I made a mistake in my previous response. I could not find any documentation that shows CLL can cause ischemic cardiac disease.
I hope this clears up any confusion. Let me know if you have any other questions.
So be very careful if you are searching for information. Confirm the references. At least you get a reference; Google just tells you what it wants. What surprised me was that it changed its answer after I challenged its results. Remember, I reviewed the references noted and could not find anything relevant.
You still need to feel comfortable with the reference. This is an example of why AI is scary.
Written by
Rico49
Thanks, that's a very instructive example of what ChatGPT researchers call "hallucination" but we would call lying. I've mentioned a similar but less dangerous example in my recent post about this technology breakthrough, but with limitations.
I thought it was you that wrote that article a few months ago. I was just amazed when it said that it had made a mistake after giving me all that incorrect information. Scary!
Large language models, including OpenAI's GPT-3.5, are powerful tools that can provide accurate responses to complex questions. However, despite their impressive capabilities, there is still a risk of inaccurate or false responses, known as "hallucination."
This phenomenon can have serious implications in critical industries like healthcare and business operations. It is essential to implement safeguards such as human oversight to refine inputs and control outputs to mitigate this risk. Currently, many applications of large language models require human supervision to ensure reliable results."
Interesting article in a newspaper, basically written by one player. I don't put newspapers in the category of unbiased sources. Sam Altman is selling his product.
Stanford is not always a good source- very political and biased in many areas.
Forrester names Moveworks- so who is moveworks and what makes them an expert?
You are making my point. Know who your references are. The media is the last place I would put my trust; they report what they want you to hear. That's why this is scary.
If you look at a research report at the National Institutes of Health or the other medical journals and don't trust them, then I wouldn't search anything, period.
You quoted: "It is essential to implement safeguards such as human oversight to refine inputs and control outputs to mitigate this risk. Currently, many applications of large language models require human supervision to ensure reliable results."
Amen! For now that is "US" as we use the AI search tool.
I agree. Sometime in the future these chatbots may help with diagnosis and support.
BUT NOT yet.
The technology is still too immature. Training takes huge numbers of web pages and social media posts to get the chatbots to learn a conversational style. That data includes not only scientific data but also a lot of conspiracy sites, comments, and pure disinformation.
They will likely learn and improve. Human oversight and correction should teach them the difference.
You said very well what I was trying to say. I am an engineer who was trained in problem solving like all engineers. Bottom line is know the sources. Is the reference a medical journal or something someone authored who may have a bias. It complicates the process significantly because you have to confirm the findings in the search with the references. At least with AI you get references. I have only used Microsoft's Bing version to date.
What troubles me most is that "know the sources" is going to become increasingly difficult. Already we have fake websites mimicking the real thing so well that in just a few hours many people open their bank and credit card accounts to them thinking they're making a purchase.
We also have fake information sources, i.e. plausible articles written by bots and inserted into reputable publications under the name of real journalists.
The experts predict that AI will ratchet up such nefarious activities. Then there's state sponsored cyber-warfare... Regulation? Ha!
You just nailed it. It will become difficult to even trust sources that you think are trustworthy, because you don't know if the material really came from them. Someone will publish a medical report in the New England Journal of Medicine format that looks exactly like the real site. This is what makes AI search, and AI in general, so scary.
Veeeery interesting. AI, designed by HI, still bears the hallmarks of HI. Isn't your experience pretty much how you'd expect a human to respond? Come up with a glib, half-researched answer, then refine or correct it when challenged.
These AI systems will quickly become more sophisticated though, and like it or not they are going to challenge existing knowledge platforms sciencefocus.com/future-tec...
If we think that cyber-crime and web-spread disinformation are big problems now, just wait criticalinsight.com/resourc...
Already ChatGPT has the capacity to spontaneously generate not only misinformation but fake sources to back it up, apparently without the input of bad actors theguardian.com/commentisfr...
Yes, that rings true. Recently my husband's colleague quizzed an AI "mind" about a physics question, and similarly got credible-sounding falsehood after falsehood, for which it provided fake references. When called out, it apologized, claimed it was only a computer, then made more stuff up, again sounding confident and backing up its claims with fake references, and when called out again, apologized again, and round and round.
We have learnt how to use search engines. With experience, search results improve.
Similarly, it takes experience to formulate questions in a way that helps ChatGPT to be more accurate.
It simply picks the most probable next phrase given a previous phrase.
It is incredible to me that such a strategy can write so convincingly. And irritating that it is equally convincing when blatantly wrong. On the other hand, I have colleagues with similar behaviour 🤪
Anyway, I have found that the key is lots and lots of context. In some sense, the opposite of search engines.
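To illustrate the "most probable next phrase" idea mentioned above, here is a toy sketch. This is NOT how ChatGPT actually works (real models use neural networks over tokens and vast corpora); the corpus and function names here are made up purely for illustration. It just shows how picking the most frequent continuation can produce fluent-looking text with no notion of truth:

```python
from collections import Counter, defaultdict

# Hypothetical tiny corpus, for illustration only.
corpus = (
    "cll is a chronic leukemia . cll is usually slow growing . "
    "the heart beats irregularly during afib ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the most frequent continuation of `word` in the toy corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_probable_next("cll"))    # "is" — it follows "cll" twice here
print(most_probable_next("heart"))  # "beats"
```

Note that the model "confidently" continues any word it has seen, whether or not the resulting sentence is true. That, scaled up enormously, is roughly why hallucinations sound so plausible.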
Yes, the problem is we won't be able to trust the sources; they can be fake. So now if you do a search and it shows you the sources, you have to go to that source, and probably their website, to confirm that it's a valid report. So at the end of the day, a search engine is supposed to speed things up, but in reality it will just slow things down, because you're going to have to verify every single step along the way.
Have not tried an AI search. Prefacing my Google searches with "scholarly articles only" seems to have served me well. Often results will start with clearly identified paid "Ads", then go directly to NIH, Mayo Clinic, Cleveland Clinic and the like. Do hope AI does not interfere.
Public-facing AI is a good party trick. But it is prone to hallucination and presents things in ways that sound good to human ears - after all, it's been optimized to do that. And attempts to make it better are in many cases making it much worse, for technical reasons having to do with how AI models are built.
Treat it like a Wikipedia written by teenagers. It might be a good starting point for research, but ultimately, its advice is worth what you paid for it.