In part two I reported on the ways in which publishers, aided by a small but dedicated band of sleuths, are countering the threat of plagiarised and phony scientific literature, which AI can facilitate on a vast scale healthunlocked.com/cllsuppo...
Now publishing giant Wiley has imposed the ultimate penalty on 19 journals permeated with ChatGPT-generated content.
Shutting down real journals, however, doesn't stop content being published by fake journals. Furthermore, AI may be used to promote those journals within legitimate journal rankings, simply by gaming the ranking algorithms. By the time the imposters are unmasked, havoc may already have been wreaked retractionwatch.com/2024/06...
If you're curious to know what a ChatGPT-generated scientific paper looks like, this is a fairly mundane example mdpi.com/2673-8937/3/2/18 from a list maintained by Retraction Watch retractionwatch.com/papers-... I suspect this is the tip of a growing iceberg.
Written by bennevisplace
Looks like we're nearing a battle of the AIs, white hats v. black hats. How will humans be pulled into the battle between them? They may come to recognize humans as pawns that can choose, but that are not infallible and are quite easily manipulated.
AI is a solid gold anchor, but on a ship that is listing, does one keep hold of it? 🕚💣🤔
Wow. That paper is utterly believable and has 2,500 views… which is probably vastly more than most real research gets. I suspect we will need a higher standard of proof henceforth…
But are they real views? I know bots index web pages, and their hits are often counted as actual visits by many analytics programs. Honestly, I wouldn't be surprised if AI was already able to impersonate real "viewers." Given the complexity of the existing algorithms, it doesn't seem like much of a stretch.
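For a rough idea of why raw hit counts can overstate real readership, here's a minimal sketch in Python. It assumes you have user-agent strings from a web-server log; the bot-marker list and sample agents are made up, and real analytics platforms use far more sophisticated detection than this:

```python
# Minimal sketch of why raw "view" counts can be inflated by bots.
# Assumes a list of user-agent strings taken from a web-server log.

KNOWN_BOT_MARKERS = ("bot", "crawler", "spider", "slurp")  # common substrings

def is_probable_bot(user_agent: str) -> bool:
    """Very rough check: flag user agents containing known bot markers."""
    ua = user_agent.lower()
    return any(marker in ua for marker in KNOWN_BOT_MARKERS)

def count_views(log_user_agents: list[str]) -> dict[str, int]:
    """Split raw hits into probable-human and probable-bot 'views'."""
    counts = {"human": 0, "bot": 0}
    for ua in log_user_agents:
        counts["bot" if is_probable_bot(ua) else "human"] += 1
    return counts

# Example: three hits, only one from a likely human browser.
sample = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Gecko/20100101 Firefox/126.0",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "python-requests/2.31.0 crawler",
]
print(count_views(sample))  # {'human': 1, 'bot': 2}
```

Anything that deliberately spoofs a browser user agent would sail straight past a check like this, which is why inflated view counts are so easy to manufacture.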
There is already an ongoing "replication crisis": many scientific papers have never been independently replicated to confirm their data, or the replication study has disagreed with the original paper.
The replicability deficit in scientific research has a number of possible causes and is well covered in that Wikipedia article.
I'm not so sure that the wider issues with AI include the vicious circle of garbage in, garbage out (GIGO) that you seem to be postulating. Large language models do indeed learn from billions of pieces of information trawled from the internet, but with programmed error correction:
The models rely on a machine-learning system called a neural network. Such networks have a structure modeled loosely after the connected neurons of the human brain. The code for these programs is relatively simple and fills just a few screens. It sets up an autocorrection algorithm, which chooses the most likely word to complete a passage based on laborious statistical analysis of hundreds of gigabytes of Internet text. Additional training ensures the system will present its results in the form of dialogue. In this sense, all it does is regurgitate what it learned—it is a "stochastic parrot."
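To make the "most likely next word" idea concrete, here's a toy sketch in Python. It's just a bigram frequency table over a made-up corpus, not a neural network, but it illustrates the same statistical next-word prediction the quote describes:

```python
# Toy illustration of "choose the most likely next word" from counted statistics.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word in the corpus.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word: str) -> str:
    """Return the continuation seen most often after this word in the corpus."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# Greedily extend a prompt a few words at a time.
prompt = ["the"]
for _ in range(4):
    prompt.append(most_likely_next(prompt[-1]))
print(" ".join(prompt))  # "the cat sat on the"
```

A real language model replaces the frequency table with a neural network trained on hundreds of gigabytes of text, but the basic loop of repeatedly picking a likely next word is the same.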
Furthermore, experts are finding that these AI systems have developed unexpected "emergent abilities" far beyond those of a stochastic parrot. In other words, they are becoming smart enough to work around their programmed limitations to solve problems, and even to imagine situations that should be beyond them.
This article primarily talks about using AI to spot fake research from years ago. That is a good thing. But AI also makes it faster to make up garbage, and so it makes things easier for bad actors.
I'm much more concerned about the ramifications of the job losses that are coming. Hidden in our 4% unemployment statistics is a 7.9% unemployment rate for 20-24-year-olds in the US (as of June 7). There aren't as many entry-level job listings as there were last year. I make tech that uses AI to answer and follow up on apartment inquiries, so I'm in the thick of it. So far it's solving a staffing issue, but it's only a matter of time before jobs are cut. I've already automated away the need for most entry-level hires myself.
I am happy to see someone point out the true nature of AI, which is nothing more than a glorified search engine facilitated by cloud information storage. In fact, the real strength of AI is its ability to generate multiple copies of erroneous information to create its own weighted response.
I find it more useful to think of AI as an extremely knowledgeable and patient teacher. That is how I use it.
However, that is a narrow view. If you really want to understand, then I recommend listening to Sam Altman. He was great in an interview I heard last year, and here is what I plan to listen to later: