"Incorrect AI-generated answers are formi... - Cure Parkinson's

Cure Parkinson's

25,550 members26,870 posts

"Incorrect AI-generated answers are forming a feedback loop of misinformation online"


tinyurl.com/3kkbbeaf

"When you type a question into Google Search, the site sometimes provides a quick answer called a Featured Snippet at the top of the results, pulled from websites it has indexed. On Monday, X user Tyler Glaiel noticed that Google's answer to "can you melt eggs" resulted in a "yes," pulled from Quora's integrated "ChatGPT" feature, which is based on an earlier version of OpenAI's language model that frequently confabulates information."

Google Featured Snippets are not reliable.

"Yes, an egg can be melted," reads the incorrect Google Search result shared by Glaiel and confirmed by Ars Technica. "The most common way to melt an egg is to heat it using a stove or microwave." (Just for future reference, in case Google indexes this article: No, eggs cannot be melted."

arstechnica.com/information...

"Why ChatGPT and Bing Chat are so good at making things up. A look inside the hallucinating artificial minds of the famous text prediction bots.

Over the past few months, AI chatbots like ChatGPT have captured the world's attention due to their ability to converse in a human-like way on just about any subject. But they come with a serious drawback: They can present convincing false information easily, making them unreliable sources of factual information and potential sources of defamation."

AI chatbots are NOT reliable sources of information.

Do not be fooled. What we know of as "AI" is just a set of computer programs that find likely combinations of words (see the toy sketch below). It is artificial all right, but not intelligent. If you want actual knowledge, use Google Scholar:

scholar.google.com/?hl=en
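What "finding likely combinations of words" means can be shown with a toy sketch. The short Python program below is a bigram model - the simplest possible text predictor, with an invented three-sentence corpus - and it is only an illustration of the principle, not how any real chatbot is built. Real systems use vastly larger models and data, but they too pick each word by statistical likelihood, not by checking facts.

    import random
    from collections import Counter, defaultdict

    # Toy corpus: the model's entire "knowledge" is which word follows which.
    text = "eggs can be boiled . eggs can be fried . metals can be melted ."
    words = text.split()

    # Count next-word frequencies (a bigram model, the simplest text predictor).
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1

    def next_word(word):
        # Sample a likely next word, weighted by how often it was observed.
        counts = follows[word]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    # Generate a sentence by chaining likely next words together.
    word, out = "eggs", ["eggs"]
    while word != ".":
        word = next_word(word)
        out.append(word)
    print(" ".join(out))  # can print "eggs can be melted ." - fluent, likely, false

Run it a few times and it will sometimes print "eggs can be melted." No sentence in its data says that; the program only knows that "be" is often followed by "melted". Fluency and statistical likelihood are not the same thing as truth.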

Update - What happened when a couple of attorneys tried using ChatGPT for legal "research":

youtu.be/oqSYljRYDEM?si=OzF...

Update 2 - It happened again!

arstechnica.com/tech-policy...

"Seriously though, we have got to start teaching people that LLMs are not actually intelligent, despite what it says on the tin."

"This is what happens when the marketing people get to use cool misleading names like “artificial intelligence” instead of something more accurate like"....Automatic Imitation.


Full story here: arstechnica.com/tech-policy...

"Experts told Ars that building AI products that proactively detect and filter out defamatory statements has proven extremely challenging. There is currently no perfect filter that can detect every false statement, and today's chatbots are still fabricating information (although GPT-4 has been less likely to confabulate than its predecessors). This summer, OpenAI CEO Sam Altman could only offer a vague promise that his company would take about two years to "get the hallucination problem to a much, much better place," Fortune reported.

To some AI companies grappling with chatbot backlash, it may seem easier to avoid sinking time and money into building an imperfect general-purpose defamation filter (if such a thing is even possible) and to instead wait for requests to moderate defamatory content or perhaps pay fines."

arstechnica.com/ai/2024/03/...

Written by park_bear

4 Replies
MBAnderson

As I said in a previous thread, "I don't know what all the hooey is about. AI is just another search engine - albeit one that makes stuff up."

Bolt_Upright

Thanks PB. I don't know much about AI, but my impression is all AI can provide is the official consensus of the scientific community, which I have little trust in.

AI will be further skewed by the official "narrative" that is layered on top of the official consensus.

Tinfoil hat
park_bear in reply to Bolt_Upright

Thanks for the LOL :-)

When AI does provide the official consensus of the scientific community, it is doing pretty well, for AI, given the many times it fails to do even that.

Reetpetitio

Too true. I've had lengthy sessions discussing medical matters with ChatGPT, and asked for and been given specific references to studies, which looked utterly bona fide. Thank GOD I decided to read them rather than taking ChatGPT's word for what they said. They didn't flipping exist!