"When you type a question into Google Search, the site sometimes provides a quick answer called a Featured Snippet at the top of the results, pulled from websites it has indexed. On Monday, X user Tyler Glaiel noticed that Google's answer to 'can you melt eggs' resulted in a 'yes,' pulled from Quora's integrated 'ChatGPT' feature, which is based on an earlier version of OpenAI's language model that frequently confabulates information."
Google Featured Snippets are not reliable.
"Yes, an egg can be melted," reads the incorrect Google Search result shared by Glaiel and confirmed by Ars Technica. "The most common way to melt an egg is to heat it using a stove or microwave." (Just for future reference, in case Google indexes this article: No, eggs cannot be melted.)
arstechnica.com/information...
"Why ChatGPT and Bing Chat are so good at making things up. A look inside the hallucinating artificial minds of the famous text prediction bots.
Over the past few months, AI chatbots like ChatGPT have captured the world's attention due to their ability to converse in a human-like way on just about any subject. But they come with a serious drawback: They can present convincing false information easily, making them unreliable sources of factual information and potential sources of defamation."
AI chatbots are NOT reliable sources of information.
Do not be fooled. What we call "AI" is just a computer program that finds likely combinations of words. It is artificial all right, but not intelligent. If you want actual knowledge, use Google Scholar.
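To see why "likely combinations of words" is not the same thing as knowledge, here is a deliberately tiny sketch (not a real LLM, which uses a neural network over enormous corpora): a bigram model that, given a word, returns whichever next word most often followed it in its training text. The corpus and function names are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy training text (an invented example, nothing like real LLM training data).
corpus = "you can melt ice . you can melt butter . you can boil an egg".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often in training, or None.

    Note the model has no notion of truth -- only of which word
    combinations were statistically common in its training text.
    """
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("can"))   # "melt" -- seen twice, vs. "boil" once
print(most_likely_next("melt"))  # happily continues, true or not
```

The point of the sketch: the model will cheerfully continue "melt" with *something*, because producing a plausible continuation is all it does; factual accuracy never enters the computation. Real LLMs are vastly more sophisticated, but the failure mode is the same in kind.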
Update - What happened when a couple of attorneys tried using ChatGPT for legal "research":
youtu.be/oqSYljRYDEM?si=OzF...
Update 2 - It happened again!
arstechnica.com/tech-policy...
"Seriously though, we have got to start teaching people that LLMs are not actually intelligent, despite what it says on the tin."
"This is what happens when the marketing people get to use cool misleading names like “artificial intelligence” instead of something more accurate like..." Automatic Imitation.
More...
Full story here: arstechnica.com/tech-policy...
"Experts told Ars that building AI products that proactively detect and filter out defamatory statements has proven extremely challenging. There is currently no perfect filter that can detect every false statement, and today's chatbots are still fabricating information (although GPT-4 has been less likely to confabulate than its predecessors). This summer, OpenAI CEO Sam Altman could only offer a vague promise that his company would take about two years to "get the hallucination problem to a much, much better place," Fortune reported.
"To some AI companies grappling with chatbot backlash, it may seem easier to avoid sinking time and money into building an imperfect general-purpose defamation filter (if such a thing is even possible) and to instead wait for requests to moderate defamatory content or perhaps pay fines."