When I asked which anxiety drugs do the least harm to the heart, Perplexity listed Lexapro among the most harmful in its first answer, but when I asked it to compare SSRIs, Lexapro was listed alongside sertraline as among the least harmful. Go figure.
Warning - perplexity may report contradictory info.


Not surprising; all AIs may give incorrect or contradictory responses, although some are worse than others (DeepSeek). I use various AI models professionally and see errors daily.
I suspect an oncologist would find a lot of errors that we just accept.
You should never rely on the responses in a life or death situation.
AI is useful for learning and research, and it can suggest things you may not have thought of.
It can be a great tool, but people need to learn to use it effectively and to understand its limitations.
What model did you use, and what were your exact prompts? Try again with "Pro" and/or "Deep Research".
You may have a look here:
healthunlocked.com/fight-pr...
DeepSeek proved the least bad, followed by ChatGPT, Gemini, and Perplexity, in that order.
What would be interesting to find out is whether, if you or anyone else (not me, as they may be keeping chat archives per IP) posed the same question, the answer would again be wrong or would be corrected, which would indicate whether they are capable of quick learning after being trained.
You can't extrapolate performance in one domain (or problem) to another. And this is a sample size of one.
"Chatgpt" is not one thing. The default in thevfree version is gpt4 or a variant. Gpt4 is an older model that has been superceded by o3 and o1 reasoning models.
You get o3-mini in the free version when you select "reason".
O3 and O1 are paid versions and take quite a bit of time to respond.
It seems the sources used are the problem, e.g. popular press versus PubMed studies. PubMed studies also need scrutiny, as you know.
AI is just a way to become ignorant faster. Your skepticism about all its output is warranted.
It is a tool.
A colleague of mine has always said "a fool with a tool is still a fool". This was well before AI but it applies here.
All tools have limitations and it takes time and effort to learn to use any tool effectively.
💯 I find it annoying because it provides so many "false positives" that have to be eliminated. In the end it takes me more time to use it than not to use it.
I just learned there is a term for that: Brandolini's Law. The law states:
"The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it."
I learned about Brandolini's law in this book which I highly recommend to anyone.
goodreads.com/book/show/488...
Bullshit isn’t what it used to be. Now, two science professors give us the tools to dismantle misinformation and think clearly in a world of fake news and bad data.
They also give a course, but I haven't done it.
I suggest anyone try this experiment with your preferred AI model. Ask it a question about something you personally consider yourself an expert in. I find the answers to my questions are either A) a demonstration of understanding at a middle school or high school level at best, B) a regurgitation of the question back to you in various ways, or C) just plain wrong. I would never trust it about a subject I knew little about.
I find Grok pretty accurate. It gives links for most of what it says so we can look things up ourselves. I put my sister's blood test results into it and it caught a UTI and said she needed to get to the doc. It did this AFTER she had been to her doc with those blood test results and the doc missed it. She went to the ER and got treated. I will take Grok over a doc in many cases, especially the busy, see-a-patient-every-15-minutes type. I even asked a question and made a statement based on a study I have used for sulforaphane, without mentioning the study at all. In the response, it gave me the citation to the BROK study. Perplexity is not even close and cannot search as needed.
Hey, the best is the ... or a derivative thereof. All synthesized drugs are processed using petroleum, and it's a known fact what carcinogens come along with that. Try a gummy or another low-impact edible; you'll be surprised.
AI is trained on propaganda. The same propaganda that, to give one example, says someone ate a bat in a wet market in China and spread COVID. Unless you're living under a rock, you should know it's not true. Now, it may be perceived that it was just an accidental leak from a lab in China. Ha! Actually, Fauci and friends sent funds and the ability to do gain-of-function research to China because it was illegal to do so here in the US. Using ChatGPT is like watching the TV set.
Scientists and medical influencers writing papers can be bought more cheaply than politicians.
Good thing to remember.
May I make a suggestion: AI is in the early innings, so use it with some trepidation and caution. Compare its results across multiple platforms: Grok 3 beta, Anthropic's Claude, Gemini, etc. Here in Canada we have been left hanging without a GP (over 6 million of us!), so AI has been extremely useful for translating pages of highly technical papers from sites like PubMed. Again, I'll simply copy/paste into various AIs and ask the same question... "please translate for a Luddite layman" or whatever, lol. These AIs have been tested and in some cases perform at PhD level on select benchmarks. Don't be afraid to swing for the fences, but just understand that what you may be pitched are a few change-ups. Cheers
I use it........... but I always wonder about the A.
Does Artificial mean not real?
Not produced by natural forces; artificial or fake. Fake, false, faux, imitation, simulated; not genuine or real; being an imitation of the genuine article. Man-made, semisynthetic, synthetic; not of natural origin; prepared or made artificially.
Sounds like it's a description of my ex-wife.
Good Luck, Good Health and Good Humor.
j-o-h-n
Platforms seem to compete over which information can be provided. Perplexity seems OK for consumer reviews, ingredients, and the basics of meds. I got a fairly good rundown of whether the 18.4 upgrade will work on my old iPhone. ChatGPT has been much better for science and medicine. I would like to find one that can identify a skin disease from a photo, or the name of a composition from a sound file. I haven't tried putting blood labs into one to calculate NLR, PLR, and so forth. Cheers!
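For what it's worth, those two ratios are simple arithmetic on a standard CBC panel, so you don't strictly need an AI for them. Here's a minimal Python sketch, assuming a report that lists absolute neutrophil, lymphocyte, and platelet counts in the same units; the example values are hypothetical, not from any real lab report.

```python
# Minimal sketch of the ratios mentioned above, assuming a standard CBC report.
# NLR = absolute neutrophil count / absolute lymphocyte count
# PLR = platelet count / absolute lymphocyte count

def nlr(neutrophils_abs: float, lymphocytes_abs: float) -> float:
    """Neutrophil-to-lymphocyte ratio (both counts in the same units, e.g. 10^3/uL)."""
    return neutrophils_abs / lymphocytes_abs

def plr(platelets: float, lymphocytes_abs: float) -> float:
    """Platelet-to-lymphocyte ratio (both counts in the same units, e.g. 10^3/uL)."""
    return platelets / lymphocytes_abs

if __name__ == "__main__":
    # Hypothetical CBC values in 10^3 cells/uL, for illustration only
    neutrophils, lymphocytes, platelets = 4.2, 1.8, 250.0
    print(f"NLR = {nlr(neutrophils, lymphocytes):.2f}")  # ~2.33
    print(f"PLR = {plr(platelets, lymphocytes):.2f}")    # ~138.89
```

Interpreting what those numbers mean for any individual is, of course, a question for your doctor, not a spreadsheet or a chatbot.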